This GigaOm Research Reprint Expires October 2, 2026
October 3, 2025

GigaOm Radar for Scale-Out Storage v6

Whit Walters

1. Executive Summary

Scale-out storage has evolved far beyond its original mandate of handling data growth. Today's platforms must simultaneously support AI workloads, defend against sophisticated cyberthreats, and operate seamlessly across hybrid multicloud environments. Organizations that once chose scale-out storage for capacity and performance now require these systems to serve as the foundation for their entire data strategy.

At its core, scale-out storage distributes data across multiple nodes that function as a unified system. As organizations add nodes, they gain both capacity and performance while maintaining resilience. In other words: if a node fails, operations continue. This architecture becomes essential when traditional scale-up systems reach their physical limits. Healthcare organizations managing imaging archives, financial institutions processing transaction data, and media companies handling 8K video workflows all rely on scale-out storage to grow with their data requirements rather than being constrained by them.
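
To make this concrete, here is a purely illustrative toy sketch in Python, not any vendor's implementation, of how such a system might hash files across nodes and keep a second replica so a single node failure does not interrupt access:

    import hashlib

    NODES = ["node-a", "node-b", "node-c", "node-d"]  # adding entries = more capacity and throughput
    REPLICAS = 2                                      # each file lives on two nodes

    def placement(path: str) -> list:
        """Choose REPLICAS distinct nodes for a file by hashing its path."""
        start = int(hashlib.sha256(path.encode()).hexdigest(), 16) % len(NODES)
        return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]

    print(placement("/archive/imaging/scan-0001.dcm"))
    # If either chosen node fails, the other replica keeps the file available.
    # Production systems use consistent hashing or erasure coding to limit
    # data movement when nodes are added or removed.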

For executives, scale-out storage has become a strategic business enabler. The convergence of AI initiatives and escalating ransomware threats has fundamentally changed the investment calculation. Organizations report average ransomware recovery times of 24 days with costs exceeding $5 million, while AI projects stall when data scientists lack rapid access to training datasets. Modern scale-out platforms address both challenges, but the stakes for choosing the right solution have never been higher, impacting everything from AI competitiveness to cyber resilience.

This sixth edition of our Radar reflects significant market maturation. Ransomware protection and flash memory optimizations have moved from differentiators to essential requirements. Therefore, solutions lacking either capability were excluded from consideration. We've also elevated NVMe-oF and NVMe/TCP from emerging to key features, recognizing that high-performance networking has become fundamental for demanding workloads. Looking ahead, we're tracking two emerging capabilities that signal the market's direction: integrated AI workload enablement and AI-driven cyber resilience orchestration. These features indicate that storage platforms are increasingly becoming intelligent data platforms.

This GigaOm Radar report examines 20 of the top scale-out storage solutions, comparing them against the detailed evaluation framework in our companion "Key Criteria for Evaluating Scale-Out Storage" report. Together, these reports offer a comprehensive market overview and provide the insights decision-makers need to select the right solution for their specific business requirements.

GIGAOM KEY CRITERIA AND RADAR REPORTS

The GigaOm Key Criteria report provides a detailed decision framework for IT and executive leadership assessing enterprise technologies. Each report defines relevant functional and nonfunctional aspects of solutions in a sector. The Key Criteria report informs the GigaOm Radar report, which provides a forward-looking assessment of vendor solutions in the sector.

2. Market Categories and Deployment Types

To help prospective customers find the best fit for their use case and business requirements, we assess how well scale-out storage solutions are designed to serve specific target markets and deployment models (Table 1).

For this report, we recognize the following market segments:

  • Small-to-medium business (SMB): Solutions for organizations with limited IT resources. Buyers prioritize cost-effectiveness, simple management, and the ability to scale without significant upfront investment or specialized skills.

  • Large enterprise: Solutions for organizations requiring robust, feature-rich platforms. These buyers need to handle diverse applications and ensure data integrity, security, and compliance across the business.

  • High-performance: Solutions for organizations with computationally demanding workloads. These buyers focus on optimized performance for use cases like big data analytics, AI/ML, and HPC that require extremely low latency and high throughput.

In addition, we recognize the following deployment models:

  • Hardware appliance: A turnkey solution delivered as a self-contained, vendor-supported physical device. This model is ideal for organizations prioritizing ease of deployment and operational simplicity over deep customization.

  • Software-defined storage (SDS): A flexible software layer that can be deployed on commodity servers, either on-premises or in the cloud. This model gives organizations greater control and hardware choice, and it is essential for building hybrid or multicloud storage infrastructures.

Table 1. Vendor Positioning: Target Market and Deployment Model

Vendor Positioning: Target Market and Deployment Model
TARGET MARKET: SMB | Large Enterprise | High-Performance
DEPLOYMENT MODEL: Hardware Appliance | Software-Defined Storage
Cohesity
DDN
Dell Technologies
Hammerspace
Hitachi Vantara
HPE
IBM
NetApp
Nutanix
OSNexus
Pure Storage
Quantum
Qumulo
Quobyte
Scality
ThinkParQ
TrueNAS
VAST Data
VDURA
WEKA
Source: GigaOm 2026

Table 1 components are evaluated in a binary yes/no manner and do not factor into a vendor’s designation as a Leader, Challenger, or Entrant on the Radar chart (Figure 1). 

“Target market” reflects which use cases each solution is recommended for, not simply whether that group can use it. For example, if an SMB could use a solution but doing so would be cost-prohibitive, that solution would be rated “no” for SMBs.

3. Decision Criteria Comparison

All solutions included in this Radar report meet the following table stakes—capabilities widely adopted and well implemented in the sector:

  • File protocols

  • Data services

  • Tiering

  • Secure operations

  • System management

  • Ransomware protection

  • Flash memory support

Tables 2, 3, and 4 summarize how each vendor in this research performs in the areas we consider differentiating and critical in this sector. The objective is to give the reader a snapshot of the technical capabilities of available solutions, define the perimeter of the relevant market space, and gauge the potential impact on the business.

  • Key features differentiate solutions, highlighting the primary criteria to be considered when evaluating a scale-out storage solution.

  • Emerging features show how well each vendor implements capabilities that are not yet mainstream but are expected to become more widespread and compelling within the next 12 to 18 months. 

  • Business criteria provide insight into the nonfunctional requirements that factor into a purchase decision and determine a solution’s impact on an organization.

These decision criteria are summarized below. More detailed descriptions can be found in the corresponding report, “GigaOm Key Criteria for Evaluating Scale-Out Storage Solutions.”

Key Features

  • Object storage integration: Object storage integration enables scale-out file storage systems to leverage the scalability and cost-effectiveness of object-based repositories. This feature allows organizations to seamlessly combine high-performance file access with economical object storage, optimizing data placement across different tiers based on access patterns and cost considerations.

  • Public cloud integration: Public cloud integration enables scale-out file storage systems to extend seamlessly into cloud environments, facilitating hybrid and multicloud architectures. This feature is crucial for organizations seeking to balance on-premises infrastructure with cloud flexibility, enabling consistent data access and management across diverse environments.

  • AI/ML-based analytics and management: AI/ML-based analytics and management in scale-out file storage systems leverage artificial intelligence and machine learning to provide advanced system insights and automate operations. This feature is crucial for optimizing system performance, predicting issues before they occur, and reducing administrative overhead in large-scale storage environments.

  • Data management: Data management in scale-out file storage systems encompasses comprehensive analytics, insights, and control over stored data, enabling efficient organization, access, and utilization of information assets. This feature is critical for optimizing storage resources, ensuring compliance, enhancing security, and deriving maximum value from stored data in large-scale enterprise environments.

  • Kubernetes support: Kubernetes support in scale-out file storage systems enables seamless integration with container orchestration platforms, allowing containerized applications to directly access and manage persistent storage. This feature is crucial for organizations adopting cloud-native architectures because it simplifies storage management for containerized workloads and enhances the flexibility and scalability of modern application deployments. (A provisioning sketch follows this list.)

  • GPUDirect support: GPUDirect support in scale-out file storage systems enables direct data transfer between storage and GPU memory, bypassing CPU involvement. This feature is critical for high-performance computing, AI/ML workloads, and data-intensive simulations, as it significantly reduces latency and increases throughput for GPU-accelerated applications.

  • NVMe-oF and NVMe/TCP: These protocols extend the high-speed, low-latency benefits of NVMe storage across modern data center networks. This is a crucial key feature for maximizing performance, enabling storage systems to keep pace with the demands of AI/ML, HPC, and real-time analytics workloads.
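
To ground the Kubernetes support criterion above, here is a minimal sketch of dynamic provisioning using the official Python kubernetes client. The StorageClass name scale-out-fs is a hypothetical placeholder for whatever class a given vendor's CSI driver registers:

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    # Ask the CSI driver (via its StorageClass) to provision a shared volume.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="training-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteMany"],     # shared access, typical of scale-out file storage
            storage_class_name="scale-out-fs",  # placeholder CSI StorageClass
            resources=client.V1ResourceRequirements(requests={"storage": "500Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )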

Table 2. Key Features Comparison

Key Features Comparison
Rating scale: Exceptional | Superior | Capable | Limited | Poor | Not Applicable
KEY FEATURES: Average Score; Object Storage Integration; Public Cloud Integration; AI/ML-Based Analytics & Management; Data Management; Kubernetes Support; GPUDirect Support; NVMe-oF & NVMe/TCP
Cohesity (avg 3.4): ★★★★, ★★★★★, ★★★★★, ★★★★★, ★★★, ★★
DDN (avg 3.9): ★★★★, ★★★, ★★★★, ★★★, ★★★★, ★★★★★, ★★★★
Dell Technologies (avg 4.0): ★★★★, ★★★★, ★★★★, ★★★★★, ★★★★★, ★★★★★
Hammerspace (avg 3.6): ★★★★, ★★★★, ★★★★★, ★★★★, ★★★★, ★★★
Hitachi Vantara (avg 3.7): ★★★★, ★★★★, ★★★★, ★★★★, ★★★★, ★★, ★★★★
HPE (avg 4.4): ★★★, ★★★★★, ★★★★★, ★★★, ★★★★★, ★★★★★, ★★★★★
IBM (avg 4.1): ★★★★★, ★★★★, ★★★, ★★★, ★★★★★, ★★★★★, ★★★★
NetApp (avg 4.7): ★★★★, ★★★★★, ★★★★★, ★★★★, ★★★★★, ★★★★★, ★★★★★
Nutanix (avg 4.3): ★★★★, ★★★★, ★★★★, ★★★★, ★★★★★, ★★★★★, ★★★★
OSNexus (avg 2.4): ★★★★, ★★, ★★★, ★★★★, ★★★★
Pure Storage (avg 4.6): ★★★★★, ★★★★, ★★★★★, ★★★★, ★★★★★, ★★★★, ★★★★★
Quantum (avg 2.0): ★★★, ★★★★, ★★, ★★, ★★★
Qumulo (avg 2.4): ★★★, ★★★★, ★★★, ★★★, ★★★★
Quobyte (avg 2.9): ★★★★, ★★★, ★★★★, ★★★★, ★★★
Scality (avg 2.9): ★★★★★, ★★★★, ★★, ★★★, ★★★★, ★★
ThinkParQ (avg 2.9): ★★★★, ★★, ★★★, ★★★★, ★★★★, ★★★
TrueNAS (avg 2.3): ★★★, ★★★, ★★, ★★★, ★★★★★
VAST Data (avg 4.4): ★★★★★, ★★★, ★★★★, ★★★★, ★★★★★, ★★★★★, ★★★★★
VDURA (avg 2.0): ★★★, ★★★, ★★, ★★★, ★★★
WEKA (avg 4.1): ★★★★★, ★★★★, ★★★, ★★, ★★★★★, ★★★★★, ★★★★★
Source: GigaOm 2026

Emerging Features

  • Composable infrastructure: Composable infrastructure in scale-out storage systems represents an emerging architecture that disaggregates storage, compute, and networking resources into flexible pools that can be dynamically composed and recomposed based on workload requirements. This approach is crucial for organizations seeking to maximize resource utilization, reduce infrastructure silos, and achieve cloud-like flexibility in their on-premises environments.

  • Integrated AI workload and data enablement: This feature represents the evolution of storage from a passive repository for AI data to an active platform that accelerates the entire AI data lifecycle. It involves native support for new AI-centric data types and optimized data pipelines.

  • AI-driven cyber resilience orchestration: This feature moves beyond traditional data protection to a proactive security model that uses AI for predictive threat detection and automated incident response. It is crucial for defending against sophisticated, next-generation cyberattacks; one underlying detection signal is sketched below.
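
As a toy illustration of one detection signal behind such features: ransomware-encrypted data looks statistically random, so a burst of near-maximum-entropy writes is one crude anomaly indicator. The Python sketch below is hypothetical; shipping products combine many such signals with trained models:

    import math
    import os
    from collections import Counter

    def byte_entropy(data: bytes) -> float:
        """Shannon entropy in bits per byte; ~8.0 suggests encrypted or compressed data."""
        if not data:
            return 0.0
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

    def looks_suspicious(write_samples: list, threshold: float = 7.5) -> bool:
        """Flag a burst of writes whose average entropy approaches randomness."""
        avg = sum(byte_entropy(s) for s in write_samples) / len(write_samples)
        return avg > threshold

    print(looks_suspicious([b"quarterly report draft " * 50]))     # False: ordinary text
    print(looks_suspicious([os.urandom(4096) for _ in range(8)]))  # True: random bytes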

Table 3. Emerging Features Comparison 

Emerging Features Comparison
Rating scale: Exceptional | Superior | Capable | Limited | Poor | Not Applicable
EMERGING FEATURES: Average Score; Composable Infrastructure; Integrated AI Workload and Data Enablement; AI-Driven Cyber Resilience Orchestration
Cohesity (avg 2.7): ★★★★, ★★★★
DDN (avg 3.7): ★★★★, ★★★★★, ★★
Dell Technologies (avg 3.3): ★★, ★★★★, ★★★★
Hammerspace (avg 3.7): ★★★★★, ★★★★★
Hitachi Vantara (avg 2.7): ★★★★, ★★★★
HPE (avg 4.3): ★★★★★, ★★★★, ★★★★
IBM (avg 3.0): ★★, ★★★★, ★★★
NetApp (avg 4.0): ★★★★, ★★★★, ★★★★
Nutanix (avg 3.7): ★★★, ★★★★, ★★★★
OSNexus (avg 1.3): ★★★
Pure Storage (avg 3.7): ★★★, ★★★★, ★★★★
Quantum (avg 1.0): ★★
Qumulo (avg 2.7): ★★★★, ★★, ★★
Quobyte (avg 0.7): ★★
Scality (avg 2.3): ★★★, ★★★★
ThinkParQ (avg 1.0): ★★★
TrueNAS (avg 0.7):
VAST Data (avg 4.3): ★★★★★, ★★★★★, ★★★
VDURA (avg 1.0): ★★
WEKA (avg 3.0): ★★★★, ★★★★
Source: GigaOm 2026

Business Criteria

  • Flexibility: Flexibility in scale-out file storage systems refers to the ability to support diverse workloads, data types, and deployment models while adapting to changing business requirements. It is crucial for organizations seeking to maximize their storage investment across multiple use cases and evolving IT strategies, particularly in hybrid and multicloud environments.

  • Performance: Performance in scale-out file storage systems refers to the ability to handle concurrent access from multiple clients and applications while delivering low latency and high throughput. It is critical for organizations running data-intensive workloads such as big data analytics, HPC, and multi-user environments, where rapid data access and processing directly impact operational efficiency and user productivity.

  • Efficiency: Efficiency in scale-out file storage systems encompasses energy consumption, carbon footprint, and resource use optimization. It is crucial for organizations looking to reduce operational costs, meet sustainability goals, and comply with increasingly stringent environmental regulations while maintaining or improving storage performance.

  • Upgradability: Upgradability in scale-out file storage systems refers to the ability to seamlessly integrate new hardware and software components while maintaining operational continuity. It is critical for organizations seeking to extend their storage system's lifespan, improve return on investment (ROI), and reduce total cost of ownership (TCO) by avoiding disruptive and costly full-system migrations.

  • Ease of use: Ease of use in scale-out storage solutions encompasses intuitive management interfaces, automation capabilities, and intelligent system analytics. It is crucial for organizations seeking to minimize operational overhead, reduce human error, and efficiently manage large-scale storage environments across diverse deployments, including hybrid and multicloud scenarios.

  • Scalability: Scalability in scale-out file storage systems refers to the ability to expand capacity, performance, and functionality without disrupting operations or compromising efficiency. It is crucial for organizations to meet growing data demands, accommodate evolving workloads, and maintain cost-effectiveness as their storage needs increase.

  • Cost transparency: Cost transparency in scale-out file storage systems refers to the clarity, predictability, and comprehensibility of all financial aspects. This is crucial for organizations to accurately forecast TCO, calculate ROI, and make informed purchase decisions by understanding the exact upfront investment, including all software/hardware licenses, consulting fees, and potential additional licensing for features or support.

Table 4. Business Criteria Comparison

Business Criteria Comparison
Rating scale: Exceptional | Superior | Capable | Limited | Poor | Not Applicable
BUSINESS CRITERIA: Average Score; Flexibility; Performance; Efficiency; Upgradability; Ease of Use; Scalability; Cost Transparency
Cohesity (avg 3.6): ★★★★, ★★★, ★★★, ★★★★, ★★★★, ★★★★, ★★★
DDN (avg 4.0): ★★★, ★★★★★, ★★★★, ★★★★, ★★★★, ★★★★, ★★★★
Dell Technologies (avg 3.9): ★★★★, ★★★★, ★★★★, ★★★★, ★★★★, ★★★★, ★★★
Hammerspace (avg 4.3): ★★★★★, ★★★★, ★★★★, ★★★★★, ★★★, ★★★★★, ★★★★
Hitachi Vantara (avg 4.1): ★★★★, ★★★★, ★★★★, ★★★★★, ★★★★, ★★★★, ★★★★
HPE (avg 4.7): ★★★★, ★★★★★, ★★★★, ★★★★★, ★★★★★, ★★★★★, ★★★★★
IBM (avg 3.4): ★★★★, ★★★★★, ★★, ★★★, ★★★, ★★★★, ★★★
NetApp (avg 4.3): ★★★★★, ★★★, ★★★★★, ★★★★, ★★★★★, ★★★★, ★★★★
Nutanix (avg 4.3): ★★★★★, ★★★★, ★★★★, ★★★★, ★★★★★, ★★★★, ★★★★
OSNexus (avg 3.7): ★★★★★, ★★★, ★★★, ★★, ★★★★, ★★★★, ★★★★★
Pure Storage (avg 4.4): ★★★★, ★★★★, ★★★★, ★★★★★, ★★★★★, ★★★★, ★★★★★
Quantum (avg 3.6): ★★★, ★★★★, ★★★★, ★★★★, ★★★, ★★★★, ★★★
Qumulo (avg 4.0): ★★★★★, ★★★★, ★★★, ★★★★, ★★★★★, ★★★★, ★★★
Quobyte (avg 3.7): ★★★, ★★★★★, ★★, ★★★★, ★★★★, ★★★★, ★★★★
Scality (avg 3.7): ★★★★, ★★★★, ★★★, ★★★★★, ★★★, ★★★★, ★★★
ThinkParQ (avg 3.1): ★★★, ★★★, ★★★, ★★★, ★★★, ★★★★, ★★★
TrueNAS (avg 3.7): ★★★★★, ★★★, ★★★, ★★★, ★★★★, ★★★, ★★★★★
VAST Data (avg 4.7): ★★★★, ★★★★★, ★★★★★, ★★★★★, ★★★★★, ★★★★★, ★★★★
VDURA (avg 3.6): ★★★★, ★★★★, ★★★, ★★★★, ★★★★, ★★★, ★★★
WEKA (avg 4.0): ★★★, ★★★★★, ★★★★, ★★★★, ★★★★, ★★★★★, ★★★
Source: GigaOm 2026

4. GigaOm Radar

The GigaOm Radar plots vendor solutions across a series of concentric rings with those set closer to the center judged to be of higher overall value. The chart characterizes each vendor on two axes—balancing Maturity versus Innovation and Feature Play versus Platform Play—while providing an arrowhead that projects each solution’s evolution over the coming 12 to 18 months.

Figure 1. GigaOm Radar for Scale-Out Storage

As you can see in Figure 1, the scale-out storage market has undergone a significant repositioning over the past year. The most striking change from our previous Radar is the large shift of vendors from the Maturity hemisphere into Innovation. This migration reflects a fundamental reality: architectures that provided stable, predictable growth just 12 months ago now require substantial reengineering to support AI workloads and meet new cyber resilience standards.

Equally notable is the consolidation toward Platform Play solutions. The right half of the chart now contains the vast majority of vendors, indicating that customers increasingly expect comprehensive data management capabilities rather than specialized point solutions. The empty Maturity/Feature Play quadrant reinforces this trend: where established specialists once thrived, the market has moved on. Organizations need either broad platform capabilities or rapid innovation cycles to remain competitive. Standing still with a narrow focus is no longer sustainable.

These shifts have created three distinct competitive clusters. The Maturity/Platform Play quadrant houses established vendors that have successfully integrated modern capabilities like ransomware protection and flash optimizations into their proven enterprise platforms. These vendors maintain their methodical approach while expanding their feature sets. The Innovation/Platform Play quadrant has become the most populated area of the chart, attracting vendors from across the spectrum. This concentration indicates that the market sees opportunity in building comprehensive platforms that can adapt quickly to emerging requirements. The Innovation/Feature Play quadrant contains performance specialists focused on specific high-value workloads, particularly those related to AI and HPC.

The distribution of Leaders across multiple quadrants demonstrates that success isn't confined to a single strategy. Vendors prioritizing product stability and those embracing rapid development cycles can both achieve market leadership if they execute well on their chosen approach. The prevalence of Outperformer designations confirms that the pace of development remains intense. Notably, we see no Forward Movers this year, as every vendor is maintaining at least market pace, with many exceeding it.

Several vendors positioned just outside the Leaders circle suggest the competitive landscape remains fluid. These Challengers have strong capabilities but need to address specific gaps to achieve Leader status. Their proximity to the center indicates these gaps are addressable, setting up potential position changes in next year's evaluation.

Year-over-year changes reflect broader market dynamics. New vendors have arrived with AI-first architectures, while some traditional vendors have either consolidated through acquisition or exited the market. The vendors we tracked last year have largely progressed as anticipated, though the acceleration toward Platform Play happened faster than expected. Those who invested early in flash optimization and ransomware protection have benefited, while vendors slow to adopt these capabilities have lost ground.

Looking ahead, the next 12 to 18 months will likely see continued Innovation hemisphere growth as vendors race to integrate capabilities like vector databases for retrieval-augmented generation (RAG) and automated recovery orchestration. The market's trajectory is clearly toward intelligent, self-managing platforms that treat storage as one component of a broader data intelligence strategy.

In reviewing solutions, it’s important to keep in mind that there are no universal “best” or “worst” offerings; every solution has aspects that might make it a better or worse fit for specific customer requirements. Prospective customers should consider their current and future needs when comparing solutions and vendor roadmaps.

INSIDE THE GIGAOM RADAR

To create the GigaOm Radar graphic, key features, emerging features, and business criteria are scored and weighted. Key features and business criteria receive the highest weighting and have the most impact on vendor positioning on the Radar graphic. Emerging features receive a lower weighting and have a lower impact on vendor positioning on the Radar graphic. The resulting chart is a forward-looking perspective on all the vendors in this report, based on their products’ technical capabilities and roadmaps.
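
As a minimal illustration of that weighting scheme, the Python sketch below uses invented weights (GigaOm does not publish the actual values); the example ratings happen to mirror DDN's rows in Tables 2, 3, and 4:

    # Invented weights: key features and business criteria dominate,
    # emerging features contribute less, per the methodology above.
    WEIGHTS = {"key_features": 0.4, "business_criteria": 0.4, "emerging_features": 0.2}

    def radar_score(ratings: dict) -> float:
        """Combine 0-5 ratings per group into one weighted composite."""
        return sum(
            WEIGHTS[group] * (sum(vals) / len(vals))
            for group, vals in ratings.items()
        )

    example = {
        "key_features": [4, 3, 4, 3, 4, 5, 4],
        "emerging_features": [4, 5, 2],
        "business_criteria": [3, 5, 4, 4, 4, 4, 4],
    }
    print(round(radar_score(example), 2))  # 3.88 with these invented weights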

Note that the Radar is technology-focused, and business considerations such as vendor market share, customer share, spend, recency or longevity in the market, and so on are not considered in our evaluations. As such, these factors do not impact scoring and positioning on the Radar graphic.

For more information, please visit our Methodology.

5. Solution Insights

Cohesity: Cohesity Data Cloud Platform*

Solution Overview
Cohesity provides a data security and management platform focused on consolidating infrastructure for data protection, management, and analytics across hybrid and multicloud environments. The solution is centered on the Cohesity Data Cloud Platform, a software-defined web-scale architecture managed globally by Cohesity Helios, a SaaS-based control plane that provides AI-powered insights. This general, platform-centric strategy is designed to serve a wide range of enterprise use cases. The solution will look and feel different over the contract lifecycle. Cohesity delivers an aggressive roadmap and is flexible and responsive to market demands, valuing rapid advancement and frequent updates. This is demonstrated by its recent focus on developing new capabilities in AI-driven data intelligence and cyber resilience orchestration.

Cohesity is positioned as a Challenger and Fast Mover in the Innovation/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
Cohesity scored well on a number of decision criteria, including:

  • Public cloud integration: The platform demonstrates deep, native integration with the major public cloud providers (AWS, Microsoft Azure, and Google Cloud), enabling seamless data mobility, tiering, archival, and disaster recovery use cases.

  • AI/ML-based analytics and management: Cohesity provides strong centralized management and analytical capabilities through its Helios management plane, offering global visibility, AI-powered insights for operational efficiency, and proactive ransomware threat detection across the entire data estate.

  • Data management: The solution offers a robust set of data management features that go beyond simple backup, including immutable snapshots, data classification, global search, and policy-based automation, which are critical for meeting security and compliance requirements.

Opportunities
Cohesity has room for improvement in a few decision criteria, including:

  • Kubernetes support: The solution could improve its support for cloud-native workloads by offering more granular, application-aware protection and recovery capabilities specifically tailored to the complexities of Kubernetes environments.

  • GPUDirect support: There is an opportunity to enhance support for AI/ML workloads by integrating with technologies like NVIDIA GPUDirect Storage, which would accelerate data pipelines and reduce latency for model training and inference.

  • NVMe-oF and NVMe/TCP: While performant, the platform could further improve its capabilities for latency-sensitive applications by expanding its support for modern, high-throughput storage protocols like NVMe-over-Fabrics and NVMe/TCP.

Purchase Considerations
Cohesity is licensed primarily via a capacity-based subscription model. While this approach offers predictability, the full platform capabilities are productized across various SKUs and tiers, which can add complexity for customers trying to align features with specific requirements. As a comprehensive Platform Play, the solution is designed to consolidate and replace multiple legacy point products, making it a strategic purchase for medium to large enterprises seeking to modernize their data management infrastructure. Deployment is flexible, offered as a self-managed software solution on certified hardware, a cluster of Cohesity nodes, or as a backup-as-a-service (BaaS) offering managed by Cohesity. Due to its platform nature, deployment and migration from incumbent solutions often benefit from professional services to ensure a smooth transition and full realization of the platform’s value.

Use Cases
As a Platform Play vendor, Cohesity supports a wide range of use cases across nearly all industry verticals, with a strong focus on financial services, healthcare, and the public sector. Its architecture is well suited for organizations looking to consolidate backup and recovery, disaster recovery, and long-term retention. More importantly, it is increasingly adopted for modern use cases, including robust cyber resilience against ransomware, data governance and e-discovery, and providing data for analytics and AI/ML pipelines, positioning it as a central component of an enterprise data strategy.

DDN: Infinia

Solution Overview
DDN is a provider of high-performance data storage solutions, specializing in systems for demanding AI and HPC workloads. The company's latest offering in this space is DDN Infinia, a next-generation, software-defined data platform engineered for modern AI use cases such as inference, RAG, and large-scale analytics.

Infinia is a single product within the larger DDN portfolio and is designed as a unified, high-performance architecture for managing massive volumes of unstructured data across edge, core, and cloud locations. The solution's key components include the Infinia Object storage engine, Infinia Data Services for governance and observability, the Infinia Core Engine for scalability and data placement, and Infinia Cloud Integration for hybrid cloud operations. Infinia’s general, platform-centric approach is focused on holistically addressing the AI data lifecycle. The solution will look and feel different over the contract lifecycle. DDN delivers an aggressive roadmap, values rapid advancement with monthly code releases, and is responsive to the market with its new Infinia platform, which only became generally available in February 2025.

DDN is positioned as a Leader and Fast Mover in the Innovation/Feature Play quadrant of the scale-out storage Radar chart.

Strengths
DDN scored well on a number of decision criteria, including:

  • GPUDirect support: The solution offers robust support for GPU-based workloads through deep integrations with the NVIDIA ecosystem, including certified support for NVIDIA DGX AI Factory, NVIDIA RAG and NIM Inference Microservices, and NVIDIA NeMo. All protocols also support remote direct memory access (RDMA), which is foundational for direct data transfer between storage and GPU memory, enabling accelerated performance for AI and HPC applications. (A generic illustration of this data path follows this list.)

  • Kubernetes support: Infinia provides advanced support for containerized environments, featuring a native CSI driver for dynamically provisioning block-based persistent volumes. It also offers advanced, application-aware data services, including an event-driven pipeline (REDQueue) that allows containerized microservices to react to object changes, automating complex AI workflows within Kubernetes.

  • AI/ML-based analytics and management: Infinia delivers exceptional analytics and management capabilities through a unified interface that includes a powerful AIOps framework. The platform provides predictive hardware fault detection, automated dataset rebalancing, and deep telemetry with over 1,000 metrics per node that can be exported to external platforms like Datadog and Elastic for comprehensive observability.
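
As referenced in the GPUDirect bullet above, the sketch below shows what a GPUDirect Storage read path looks like from application code, using NVIDIA's kvikio Python bindings for cuFile. It is a generic illustration with a placeholder path, not DDN-specific code:

    import cupy as cp
    import kvikio

    # Destination buffer allocated directly in GPU memory (64M float32 = 256 MiB).
    gpu_buf = cp.empty(64 * 1024 * 1024, dtype=cp.float32)

    # With GDS available, cuFile DMAs data from storage into GPU memory,
    # bypassing a CPU bounce buffer entirely.
    f = kvikio.CuFile("/mnt/data/train-shard-000.bin", "r")  # placeholder path
    future = f.pread(gpu_buf)  # asynchronous read
    nbytes = future.get()      # block until the transfer completes
    f.close()

    print(f"read {nbytes} bytes straight into GPU memory")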

Opportunities
DDN has room for improvement in a few decision criteria, including:

  • Public cloud integration: While Infinia offers a fully supported software-defined storage (SDS) deployment through the Google Cloud Marketplace and a validated proof-of-concept on Oracle Cloud Infrastructure (OCI), it could improve by expanding its native fully supported deployments to other major cloud providers, like AWS and Azure, to provide broader multicloud flexibility.

  • Data management: The platform provides a comprehensive suite of data management features, including advanced, policy-based data placement and inline compression. It could be improved by adding native deduplication capabilities, which are currently on the roadmap, to further enhance storage efficiency across all workloads.

  • NVMe-oF and NVMe/TCP: Infinia provides NVMe-oF over TCP for block-based persistence through its Kubernetes CSI driver. To improve, DDN could expand this capability beyond its current implementation, making it a more core, defining, and broadly applicable feature of the entire architecture rather than one positioned primarily for a specific Kubernetes use case.

Purchase Considerations
Infinia is available through an all-inclusive, capacity-driven subscription model that can be consumed as a storage-as-a-service (STaaS) offering, which simplifies budgeting, as the cost per terabyte decreases with scale. As a Feature Play solution, Infinia is designed to be a comprehensive data platform for AI workloads, which may require the displacement of incumbent systems to realize its full benefits. Its focus on exabyte-scale performance and large enterprise customers like xAI and NVIDIA makes it best suited for large organizations rather than SMBs. DDN offers flexible deployment options, including software-only on certified commodity hardware, as a preconfigured appliance from DDN or OEMs, or directly in the public cloud. The solution is notably easy to deploy for its target market, with claims of deploying a 120-node cluster in under 10 minutes. However, organizations new to large-scale AI infrastructure may require professional services to optimize for specific data pipelines.

Use Cases
As a Feature Play vendor, DDN engineered Infinia for a specific set of high-value use cases centered around the most demanding AI and data-intensive analytics workloads. Horizontally, it is built to support AI inference and RAG, data preparation for model training, and advanced analytics.

While its technology can apply to many sectors, the platform specifically targets verticals with extreme performance requirements. These include financial services for real-time fraud detection and quantitative analysis, healthcare and life sciences for drug discovery, and the public sector for sovereign AI and weather modeling. Other key target industries include automotive for autonomous driving data analysis, manufacturing for AI factories, and energy for high-performance modeling.

Dell Technologies: PowerScale 

Solution Overview
Dell Technologies provides enterprise-grade, scale-out storage through its PowerScale platform, which is engineered on the mature OneFS operating system. The solution is a unified file and object platform that can be deployed as a physical appliance (in all-flash, hybrid, or archive configurations) or as software-defined storage in the public cloud. PowerScale is a core component of the broader Dell Technologies portfolio and serves as the foundational storage for the Dell AI Data Platform. Dell Technologies pursues a comprehensive, platform-centric strategy designed to support a vast range of enterprise workloads, from demanding, data-intensive AI and analytics to large-scale archives.

Dell Technologies' approach prioritizes stability and continuity, and the solution will look and feel largely the same over the contract lifecycle. The vendor focuses on methodical, incremental improvements to its platform, enhancing performance, data services, and security. Recent advancements exemplify this structured evolution, including support for denser flash media, higher-speed networking, and new software features like MetadataIQ for accelerated data indexing. This strategy ensures a consistent user experience and assured compatibility while steadily increasing the platform's capabilities.

Dell Technologies is positioned as a Leader and Outperformer in the Maturity/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
Dell Technologies scored well on a number of decision criteria, including:

  • Data management: PowerScale offers a rich suite of data management capabilities, including policy-based automation for tiering (SmartPools) and replication (SyncIQ and SmartSync). The platform enhances AI and analytics workflows with MetadataIQ for rapid metadata querying and an open source document loader optimized for RAG workloads. Its extensive ecosystem includes integrations with over 250 ISVs, ensuring broad interoperability. (A generic tiering sketch follows this list.)

  • Kubernetes support: Dell Technologies provides comprehensive support for containerized environments through its CSI driver. The driver supports advanced features for multicluster deployments, including volume cloning, quotas, and snapshots. Further integration with automation tools like Ansible, Terraform, and VMware vRealize Orchestrator simplifies storage management in modern application development pipelines.

  • GPUDirect support: The platform is highly optimized for GPU-centric workloads, offering support for GPUDirect Storage over RDMA (RoCEv2) to accelerate AI and HPC applications. Dell Technologies has achieved numerous certifications with NVIDIA, including for DGX SuperPOD and BasePOD, and was one of the first storage partners to certify an all-Ethernet solution with NVIDIA for high-performance AI environments.
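
As noted in the data management bullet above, policy-based tiering can be pictured with a short, generic Python sketch. The age threshold and mount path are invented for illustration; this is not the SmartPools API:

    import os
    import time

    TIER_AFTER_DAYS = 90  # invented policy: cold after 90 days without access

    def find_cold_files(root: str, days: int = TIER_AFTER_DAYS):
        """Yield files whose last access time exceeds the policy threshold."""
        cutoff = time.time() - days * 86400
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                if os.stat(path).st_atime < cutoff:
                    yield path

    # A platform applies such policies inline and moves data automatically;
    # this loop merely lists candidates an archive tier would receive.
    for path in find_cold_files("/mnt/scaleout/projects"):  # placeholder mount
        print("tier to archive:", path)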

Dell Technologies was classified as an Outperformer given its high rate of development over the last year and its strong roadmap. The platform saw significant updates, including launching the industry's densest flash drives, doubling networking speed, and releasing a new multipath client driver. Forthcoming releases like Project Lightning, a parallel file system for large-scale AI, and the PowerScale Cybersecurity Suite signal continued momentum.

Opportunities
Dell Technologies has room for improvement in a few decision criteria, including:

  • NVMe-oF and NVMe/TCP: While PowerScale supports NFS over RDMA for high-performance client access, its broader, client-facing NVMe/TCP implementation could be enhanced to match the offerings of some competitors who lead in this area. Expanding native support would provide customers with more options for high-speed, low-latency connectivity across standard Ethernet networks.

  • Composable infrastructure: PowerScale delivers composability benefits through its flexible architecture, which allows for mixing node types and scaling compute resources independently with accelerator nodes. However, the opportunity exists to evolve toward a more dynamic, API-driven model that allows for the true disaggregation and on-demand composition of storage, compute, and networking resource pools to meet specific workload requirements.

  • AI/ML-based analytics and management: While Dell offers a powerful suite of tools—including Dell AIOps for predictive health, InsightIQ for deep performance analysis, and MetadataIQ for data indexing—the user experience could be streamlined. An opportunity exists to unify these distinct tools into a single, cohesive management plane to provide a more seamless administrative experience. Furthermore, expanding the generative AI capabilities of the AIOps Assistant from generating insights to performing automated, proactive remediation actions would significantly enhance operational efficiency.

Purchase Considerations
Dell Technologies offers PowerScale through a flexible and transparent Dell APEX subscription model, which provides a utility-based pricing structure for hardware, software, and support under a single agreement. This all-inclusive approach simplifies budgeting and reduces licensing complexity. As a Platform Play solution, PowerScale is designed to be a comprehensive data management platform for large enterprises, which may require the displacement of incumbent point solutions to realize its full value.

Deployment complexity is mitigated by a suite of tools and features aimed at simplifying operations. Nondisruptive upgrades and the integrated Pre-Upgrade HealthCheck (PUHC) framework streamline maintenance, while Dell AIOps provides predictive analytics and proactive management to reduce administrative overhead. Dell's robust ProSupport and Lifecycle Extension services provide long-term value and support for enterprise customers. Data migration is facilitated by tools like SyncIQ and the platform's support for nondisruptive, data-in-place upgrades.

Use Cases
As a Platform Play vendor, Dell PowerScale is designed to support a broad array of industry verticals and nearly all unstructured data use cases. The solution excels in data-intensive sectors such as healthcare and life sciences for medical imaging and genomics; financial services for tick data analysis and fraud detection; manufacturing for design, simulation, and digital twin workloads; and media and entertainment for high-resolution production pipelines. It is also heavily utilized in the public sector (SLED) for everything from public safety systems to university research archives. Its scalability and performance make it ideal for modern AI and analytics workloads, large-scale content repositories, and enterprise applications.

Hammerspace: Hammerspace Data Platform

Solution Overview
Hammerspace provides a software-defined unstructured data storage and orchestration system. The platform creates a high-performance Parallel Global File System that unifies file and object access across different storage types from any vendor, spanning geographic locations, and public and private clouds. The solution is composed of Anvil nodes for metadata services and DSX nodes for data services, which can be deployed on bare metal servers, VMs, or cloud instances. Hammerspace separates the metadata control plane from the data plane, enabling automated, nondisruptive data orchestration based on policy-driven objectives across disparate storage systems. This standards-based approach allows users and applications to access data via standard SMB, NFS, and S3 protocols without proprietary client software.
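
Because access is standards-based, no proprietary client is needed: the same namespace mounted over NFS or SMB can also be reached over S3 with a stock client. A sketch using boto3, where the endpoint, credentials, and bucket names are placeholders:

    import boto3

    # Point a standard S3 client at the platform's S3 front end (placeholder values).
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example.internal:9000",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Objects written here are the same files visible over NFS or SMB
    # in the global namespace.
    s3.upload_file("sample.csv", "datasets", "sample.csv")
    for obj in s3.list_objects_v2(Bucket="datasets").get("Contents", []):
        print(obj["Key"], obj["Size"])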

Hammerspace’s strategy is a platform-centric approach focused on decentralized environments where data spans multiple storage silos and locations, which is particularly relevant for AI, HPC, and hybrid cloud use cases. The solution will look and feel different over the contract lifecycle. Hammerspace delivers an aggressive roadmap and is flexible and responsive to market demands, valuing rapid advancement and frequent updates. The vendor is focused on emerging features and has a high rate of development, including over 2,400 contributions to the Linux kernel to enhance pNFS capabilities. Recent innovations include the Hyperscale NAS architecture, support for tape storage, an S3 client-side interface, and the ability to turn server-local NVMe into a shared "Tier 0" of storage.

Hammerspace is positioned as a Leader and Outperformer in the Innovation/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
Hammerspace scored well on a number of decision criteria, including:

  • Data management: The solution provides comprehensive, policy-driven data lifecycle management through its metadata control plane that manages data services across various storage platforms. The platform allows for automated, policy-based data orchestration, data placement, and consistent management of snapshots, replication, and data lifecycle in multisite environments.

  • Object storage integration: Hammerspace offers native object storage capabilities with S3 compatibility and strong integration with both on-premises and cloud object storage providers. This enables features like automated tiering and lifecycle management for multiple object storage targets and provides a unified namespace for both file and object access.

  • GPUDirect support: The solution is certified for NVIDIA GPUDirect Storage (GDS) and supports RDMA over Converged Ethernet (RoCE), enabling direct data transfer between storage and GPU memory. It provides optimized data paths specifically for GDS workloads, which is critical for high-performance AI and HPC applications.

Hammerspace has earned an Outperformer designation due to its high rate of development in the last year, high release cadence, and strong roadmap for the coming year, as demonstrated by recent innovations such as its Hyperscale NAS architecture and enhanced S3 interface capabilities.

Opportunities
Hammerspace has room for improvement in a few decision criteria, including:

  • AI/ML-based analytics and management: Hammerspace could improve its support for this criterion by moving beyond basic historical reporting and alerts. While the platform provides real-time telemetry and allows integration with external tools, developing native AI/ML capabilities for trend analysis, capacity forecasting, and providing AI-driven recommendations for performance tuning and cost optimization would enhance its offering.

  • AI-driven cyber resilience orchestration: The solution could be improved by incorporating more advanced, proactive security features. While Hammerspace provides robust data protection tools like immutable snapshots and WORM, it does not currently use AI/ML to predict threats. Developing fully automated response orchestration and deep, bidirectional integration with enterprise SIEM/SOAR platforms would strengthen its security posture.

  • NVMe-oF and NVMe/TCP: While Hammerspace supports NFS-RDMA, it could enhance its architecture to more broadly deliver the low-latency benefits of NVMe-oF across multiple fabric types. The current approach eliminates the need for a separate storage network, but improving and optimizing for multiple NVMe-oF fabrics would align it with leading implementations. 

Purchase Considerations
Hammerspace utilizes a transparent, subscription-based licensing model based on the capacity of data under management, which includes all features, support, and maintenance. This simplifies budgeting, though specific pricing is not public. As a Platform Play solution, Hammerspace is designed to be a comprehensive data orchestration layer, often requiring greenfield deployments or the displacement of incumbent data management tools to realize its full potential. While Hammerspace is ideal for large enterprises looking to consolidate complex, distributed data environments, it is also a strong fit for organizations of any size needing a high-performance parallel file system. For demanding use cases like AI/ML and VFX, its standards-based approach offers a compelling alternative to traditional HPC file systems, often with significantly less management overhead.

While the system is designed to be managed by enterprise IT teams, optional professional services are available for complex installations or integrations. Deployment complexity is mitigated by its software-defined nature, allowing it to run on certified commodity hardware, VMs, or in the cloud. Migration from existing systems is significantly simplified by Hammerspace's unique data-in-place assimilation capability, which brings existing storage into the global namespace with near-zero downtime, avoiding lengthy data copy operations.

Use Cases
As a Platform Play vendor, Hammerspace supports a broad range of use cases and is particularly well suited for organizations with decentralized environments where data spans multiple storage silos, data centers, and clouds. Its Hyperscale NAS capabilities and high-performance parallel global file system make it ideal for data-intensive workloads in AI/ML, HPC, and scientific research. The platform's strong data orchestration and multisite features also support distributed workforce collaboration, cloud bursting for compute, and hybrid and multicloud strategies for industries like media and entertainment, life sciences, and financial services. 

Hitachi Vantara: Virtual Storage Platform One (VSP One)

Solution Overview
Hitachi Vantara is the modern infrastructure, data management, and digital solutions subsidiary of Hitachi, Ltd. The company is executing a significant strategic pivot, consolidating its historically separate block, file, and object storage portfolios into a single, unified hybrid cloud data platform named Virtual Storage Platform One (VSP One). This platform-centric approach aims to eliminate data silos and simplify management across complex, distributed environments spanning on-premises data centers, the edge, and public clouds. The VSP One portfolio is a suite of integrated solutions, including Block for mission-critical applications, File for high-performance workloads, Object for massive scalability and data governance, and SDS for software-defined and cloud deployments. The solution will look largely the same over the contract lifecycle, even though the vendor delivers an aggressive roadmap focused on unifying its data fabric and expanding cloud integrations. That pace may invite some disruption, but it also demonstrates a flexible, responsive approach to the market.

Hitachi Vantara is positioned as a Leader and Outperformer in the Maturity/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
Hitachi Vantara scored well on a number of decision criteria, including:

  • Data management: The platform inherits a rich legacy of enterprise-grade data management from various mature Hitachi technologies, including block, file, and object solutions. This provides a comprehensive suite of features for data governance, compliance, and security that is critical for large enterprises. The solution offers advanced capabilities such as certified multitenancy, immutable object locking for cyber resiliency, and robust, policy-based automation for data lifecycle management, making it highly suitable for heavily regulated industries that require auditable data control.  

  • Object storage integration: The solution’s object storage is a central pillar of the VSP One architecture, not merely an add-on feature. It serves as a highly scalable, efficient, and resilient backend for a wide range of use cases, from active archiving to forming the foundation of data lakes for AI and analytics. Its deep integration with VSP One File for automated tiering and its designated role as a repository for AI pipelines underscore its importance in the unified data fabric, providing a seamless bridge between high-performance and capacity-oriented storage tiers.  

  • Public cloud integration: The VSP One platform extends natively into major public clouds as well as private and neoclouds, offering a full-featured version of its software with a unified control plane. This allows for consistent data services, policy management, and seamless bidirectional data mobility across the entire hybrid and multicloud landscape. This integration is central to the platform’s strategy of creating a unified data fabric that spans from on-premises to cloud environments.

Hitachi Vantara was classified as an Outperformer given its aggressive development cadence over the last 12 to 18 months, evidenced by the rapid consolidation of its portfolio under the VSP One brand, the expansion of its services to public clouds like AWS, Azure, and Google Cloud, and a strong, forward-looking roadmap focused on unifying its data fabric and deepening its AI integrations.

Opportunities
Hitachi Vantara has room for improvement in a few decision criteria, including:

  • GPUDirect support: The technical capability is a clear strength, reinforced by detailed reference architectures within the Hitachi iQ portfolio. The opportunity now is to build upon this technical foundation by creating more prescriptive, easy-to-consume deployment assets. Developing solution briefs, best practice guides, and automated deployment scripts derived from these architectures would further simplify the customer experience and accelerate the adoption of this powerful feature across a wider range of AI-driven workloads. 

  • Composable infrastructure: While Hitachi Vantara views the VSP One platform's ability to scale storage and compute resources independently as a form of composability, there is an opportunity to align more closely with the broader market definition. The vendor could strengthen its strategic position by articulating a clearer roadmap for true composable infrastructure (defined as the dynamic, API-driven aggregation and disaggregation of independent physical compute, storage, and networking resource pools). Clarifying this strategy would better align VSP One with future data center trends and differentiate it from its current highly flexible software-defined architecture. 

  • AI/ML-based analytics and management: While VSP 360 provides a strong foundation for unified management with capabilities for configuration, observability, and data governance, the opportunity remains to deliver a truly seamless user experience across the entire VSP One platform. The goal should be to completely abstract the underlying complexities of the distinct block, file, and object components. Achieving a single pane of glass for provisioning, security, monitoring, and policy enforcement across all data types would fully deliver on the VSP One vision and significantly simplify hybrid cloud operations for administrators.

Purchase Considerations
Hitachi Vantara is heavily promoting flexible consumption models through its Hitachi EverFlex program. This program moves infrastructure spend from CapEx to OpEx through pay-per-use consumption, offering several service tiers that range from customer-managed infrastructure to a fully Hitachi-managed service. A key differentiator is EverFlex's ability to provide heterogeneous management, allowing it to extend its VSP 360 control plane to manage third-party infrastructure alongside Hitachi VSP One storage.

The VSP One platform is an unequivocal Platform Play, designed to be a foundational data infrastructure that consolidates block, file, and object storage. While it can be deployed for specific projects, its full value and economic benefits are best realized when an organization makes a strategic commitment to use it for consolidating and displacing incumbent, siloed solutions over time. The company is focused on simplifying deployment, with claims of getting systems running in 30 minutes. However, the breadth of the full platform and its advanced features can introduce complexity. The platform includes mature, nondisruptive migration tools within the VSP 360 suite to ease the transition from legacy systems.

Use Cases
As a Platform Play, VSP One is designed to support a wide array of enterprise use cases across nearly all industry verticals, with particular strengths in demanding sectors like financial services, government, and life sciences. The solution is well suited for large-scale, data-intensive, and mission-critical workloads. Key use cases include AI, machine learning, and high-performance analytics, which leverage the platform’s architectural support for GPUDirect with its scalable backend data services. It is also ideal for building hybrid cloud data fabrics, enabling seamless data mobility and management between on-premises and public cloud environments. Other primary use cases include large-scale workload consolidation, data protection and cyber resiliency using its advanced replication and security features, and active archiving for long-term compliance and governance.

HPE: HPE GreenLake for File Storage

Solution Overview
HPE is a global, edge-to-cloud company with a primary focus on delivering data-first modernization through its HPE GreenLake as-a-service platform. The vendor’s scale-out storage solution, HPE GreenLake for File Storage, is a software-defined offering built on the HPE Alletra Storage MP hardware platform. It features a disaggregated, shared-nothing architecture designed to scale performance and capacity independently for demanding, enterprise-scale workloads.

HPE employs a general, platform-centric strategy, targeting a broad spectrum of enterprise use cases from the edge to the cloud. The solution's position in the Maturity hemisphere of the Radar reflects HPE’s methodical and structured approach; the vendor prioritizes stability, continuity, and a consistent user experience. 

HPE is positioned as a Leader and Fast Mover in the Maturity/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
HPE scored well on a number of decision criteria, including:

  • Public cloud integration: The solution’s foundation within the HPE GreenLake platform provides a unified, seamless hybrid cloud experience. This allows organizations to manage their file data across on-premises and public cloud environments with consistent operations and data mobility, which is critical for modern data strategies.

  • Kubernetes support: HPE provides robust integration for containerized workloads through its HPE CSI Driver for Kubernetes. This enables stateful applications to leverage enterprise-grade storage features, simplifying persistent storage management for DevOps and cloud-native application environments.

  • GPUDirect support: The platform offers strong support for NVIDIA GPUDirect Storage, enabling a direct data path between HPE Alletra storage and NVIDIA GPUs. This capability is crucial for accelerating AI/ML and HPC workloads by reducing latency and freeing up CPU resources, leading to faster model training and data analysis.
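
To make the GPUDirect data path concrete, the following is a minimal, hedged sketch using NVIDIA's KvikIO Python bindings for cuFile, the library that implements GPUDirect Storage reads. The mount path is a placeholder, and GDS must be enabled end to end for the transfer to bypass host memory; treat this as illustrative rather than an HPE-specific recipe.

```python
# Hedged sketch: read a dataset shard directly into GPU memory with KvikIO
# (Python bindings for NVIDIA cuFile / GPUDirect Storage).
# The mount path is hypothetical; GDS must be enabled on client and filesystem.
import cupy as cp
import kvikio

SHARD = "/mnt/greenlake-file/train/shard-0001.bin"  # hypothetical GDS-capable mount

buf = cp.empty(256 * 1024 * 1024, dtype=cp.uint8)   # 256 MiB buffer in GPU memory
f = kvikio.CuFile(SHARD, "r")
try:
    nbytes = f.read(buf)  # DMA from storage to GPU, no host bounce buffer
finally:
    f.close()
print(f"read {nbytes} bytes into GPU memory")
```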

Opportunities
HPE has room for improvement in a few decision criteria, including:

  • Object storage integration: HPE could enhance its support for this criterion by providing more deeply integrated, native object storage access. While object storage is available within the HPE portfolio, a more seamless multiprotocol architecture allowing simultaneous file and S3 access to the same global data fabric would improve flexibility for data lake and analytics use cases.

  • Data management: The platform could be improved by incorporating more advanced, AI-driven data management and analytics capabilities natively within the solution. Enhancing the built-in tools for data classification, lifecycle management, and intelligent data placement would reduce operational complexity and provide deeper insights without relying on adjacent toolsets.

  • AI-driven cyber resilience orchestration: While HPE offers strong ransomware protection and recovery features, it could improve by developing a more advanced AI-powered orchestration layer. Moving beyond anomaly detection to fully automated recovery workflows, including automated damage assessment and orchestrated failover, would further minimize manual intervention and accelerate recovery times in the event of a sophisticated attack.

Purchase Considerations
HPE GreenLake for File Storage is delivered through a cloud operational model with capacity-based subscription licensing. This as-a-service approach provides transparent, predictable pricing that simplifies budgeting and aligns costs with usage, which is ideal for large enterprises seeking to move from CapEx to OpEx models. The solution is effectively productized as a comprehensive Platform Play, designed to be a central component of an organization's data strategy, and this may require the displacement of incumbent solutions to realize its full value.

As a managed service, deployment complexity is significantly reduced for the end user, with HPE handling the infrastructure management. HPE provides extensive professional services and support resources, which are integral to the GreenLake model. Migration from legacy HPE systems is typically well supported, while transitioning from third-party solutions involves a standard enterprise migration project, for which HPE offers services and tools.

Use Cases
As a Platform Play vendor, HPE GreenLake for File Storage supports a wide range of use cases across most industry verticals, particularly those with demanding performance and scale requirements. The solution is highly effective for emerging workloads like AI/ML model training and GPU-accelerated computing, where its performance and GPUDirect support are critical. It also excels in building large-scale data lakes for analytics, supporting healthcare imaging (PACS), and handling high-throughput media and entertainment workflows. Additionally, it is a strong fit for general-purpose enterprise IT consolidation, serving as a centralized platform for various unstructured data needs.

IBM: Storage Scale*

Solution Overview
IBM Storage Scale is a software-defined storage (SDS) solution built on the mature and proven IBM General Parallel File System (GPFS), a high-performance parallel file system architected for the most demanding data-intensive workloads. The solution is available as a hardware appliance or as SDS deployable on-premises or in major public clouds. Its flexible architecture supports a wide range of storage media, from high-speed NVMe flash to cost-effective object storage, all managed under a single global namespace with policy-driven data placement. With broad multiprotocol support, including NFS, SMB, S3, and POSIX, Storage Scale can serve a diverse set of traditional and modern applications from a unified data pool.  

IBM’s strategy for Storage Scale is rooted in its maturity. The solution will look and feel largely the same over the contract lifecycle, as IBM prioritizes stability, consistent user experience, and assured compatibility over breakneck advancement. This methodical approach is evident in its capabilities in established technologies like high-performance networking and data access protocols. While this focus on stability means innovation on emerging features can be more incremental or delivered via integrated portfolio products, it provides a powerful and reliable foundation for mission-critical enterprise AI and HPC workloads.

IBM is positioned as a Leader and Fast Mover in the Maturity/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
IBM scored well on a number of decision criteria, including:

  • Object storage integration: The platform’s integration with object storage is exceptional, providing true unified file and object access to the same data instance within a single, global namespace. This architecture moves far beyond simple tiering; it eliminates data silos and enables in-place analytics, allowing modern, S3-native applications to work on the same data as traditional file-based applications without costly and slow data movement. This represents a best-in-class implementation; a brief unified-access sketch follows this list.

  • Kubernetes support: IBM provides exceptional, enterprise-grade support for containerized environments through a comprehensive CSI driver. The driver delivers a full suite of data services directly to Kubernetes, including dynamic provisioning, volume expansion, snapshots, and cloning. This deep integration makes Storage Scale a first-class storage platform for stateful, cloud-native applications, bridging the gap between DevOps and enterprise storage.  

  • GPUDirect support: The solution’s support for NVIDIA GPUDirect Storage is exceptional, featuring a direct, zero-copy data path between storage and GPU memory over RDMA. This is a core architectural feature designed to maximize throughput and minimize latency, making it a premier choice for organizations running large-scale, GPU-accelerated AI training and HPC simulation workloads where data access speed is a critical bottleneck.  
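
The unified file and object access noted above can be illustrated with a short, hedged sketch: a file written on the POSIX mount is read back over S3 from the same namespace. The mount path, S3 endpoint, bucket-to-fileset mapping, and credentials below are all assumptions for illustration.

```python
# Hedged sketch: one data instance, two protocols. A file written via the
# POSIX mount is read back over S3 from the same namespace; no copy step.
# Endpoint, bucket/fileset mapping, and credentials are assumed.
import boto3

POSIX_PATH = "/gpfs/fs1/projects/demo/results.csv"   # hypothetical fileset path
with open(POSIX_PATH, "w") as f:
    f.write("run,accuracy\n42,0.97\n")

s3 = boto3.client(
    "s3",
    endpoint_url="https://scale-s3.example.com",      # hypothetical S3 endpoint
    aws_access_key_id="DEMO_KEY",
    aws_secret_access_key="DEMO_SECRET",
)
obj = s3.get_object(Bucket="projects", Key="demo/results.csv")
print(obj["Body"].read().decode())
```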

Opportunities
IBM has room for improvement in a few decision criteria, including:

  • AI/ML-based analytics and management: The platform has room to improve its native AIOps capabilities. While the integrated IBM Storage Insights Pro platform provides powerful predictive analytics and capacity forecasting, these advanced AI/ML capabilities are delivered through a separate, add-on product rather than being native to Storage Scale itself. Deeper native integration would simplify the management experience and lower the barrier to adopting advanced operational intelligence.  

  • Data management: The platform’s robust policy engine, when combined with the separate IBM Spectrum Discover product, provides superior data management, including deep file analytics and automated classification. The opportunity for improvement lies in making this a truly unified solution, as the most advanced AI-driven governance capabilities are not native to the core Storage Scale product. Integrating these features would create a more seamless experience and simplify the adoption of advanced data lifecycle management. 

  • AI-driven cyber resilience orchestration: The platform’s strategy for cyber resilience is capable, leveraging AI for anomaly detection through integrated platforms like IBM Storage Defender. However, this intelligence is not native to the core Storage Scale product, and a fully automated response orchestration capability is still an emerging part of the overall vision. Building these features directly into the platform would create a more seamless and powerful security posture.

Purchase Considerations
The decision to invest in IBM Storage Scale is a strategic commitment to the broader IBM storage ecosystem, not just a single product purchase. The licensing model can be complex, requiring navigation of multiple software editions and the potential need to license separate products like IBM Storage Insights and Storage Defender to achieve the full suite of AIOps and advanced security capabilities. This can make forecasting a comprehensive TCO challenging without direct vendor engagement.  

The solution is a definitive Platform Play designed for large enterprise and high-performance environments. It is not targeted at the SMB market. Its operational management reflects its powerful but complex HPC heritage. While the graphical interface simplifies many tasks, the system's overall ease of use and upgradability can be challenging for teams without specialized skills in parallel file systems, potentially requiring investment in professional services and training. Organizations should evaluate not only the platform's features but also their internal team's capacity to manage a system of this complexity.

Use Cases
As a Platform Play vendor, IBM Storage Scale supports a wide array of use cases across industries like financial services, life sciences, research, and media and entertainment. Its primary value lies in consolidating high-value, performance-sensitive, and often disparate workloads onto a single, unified data platform. It is not intended to be a simple replacement for general-purpose file servers.  

The ideal use case is for an organization seeking to eliminate data silos between its traditional HPC or research departments and its modern data science and application development teams. The platform’s unique ability to excel at serving legacy HPC applications via its parallel file system, modern AI training pipelines via GPUDirect, and cloud-native analytics tools via S3 and a Kubernetes CSI driver (all from the same data copy) makes it a powerful foundation for building a high-performance data lakehouse and accelerating innovation.

NetApp: ONTAP

Solution Overview
NetApp provides a comprehensive scale-out storage portfolio centered on its ONTAP operating system, which serves as a unified data plane across on-premises and cloud environments. The platform is managed through BlueXP, a SaaS-delivered hybrid control plane designed to simplify operations and provide AIOps-driven insights. The hardware portfolio is segmented to address a full spectrum of enterprise workloads, including the all-flash, performance-optimized AFF A-Series; the capacity-optimized, QLC-based AFF C-Series for unstructured data; and the block-storage-specific ASA systems. NetApp’s strategy is a quintessential platform play, aiming to provide a consistent, all-encompassing data management foundation for large enterprises.

As a mature platform, the solution will look and feel largely the same over the contract lifecycle. NetApp prioritizes stability and continuity, valuing incremental improvement and assured compatibility over breakneck advancement. This methodical approach is evident in recent enhancements that improve existing capabilities, such as the introduction of new hardware with integrated offload engines and third-party validation of its AI-driven autonomous ransomware protection, which demonstrated 99% detection accuracy.

NetApp is positioned as a Leader and Outperformer in the Maturity/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
NetApp scored well on a number of decision criteria, including:

  • Public cloud integration: The platform’s integration with public clouds is exceptional, driven by a unique strategy of delivering first-party, native storage services within all major hyperscalers. Offerings like Amazon FSx for NetApp ONTAP, Azure NetApp Files, and Google Cloud NetApp Volumes are co-engineered, sold, and supported directly by the cloud providers. This deep integration provides superior performance and reliability compared to typical marketplace appliances and de-risks cloud migration by allowing enterprises to move mission-critical applications without refactoring code, thereby accelerating cloud adoption and reducing TCO. A short provisioning sketch follows this list.

  • AI/ML-based analytics and management: The platform's AIOps capabilities are robust, delivered through a multilayered, integrated suite of tools. Active IQ provides fleet-wide telemetry and predictive health insights, Data Infrastructure Insights offers full-stack observability across heterogeneous environments, and BlueXP consolidates these capabilities into a unified control plane with automated workflows. This mature AIOps implementation moves beyond basic monitoring to proactively reduce risk and lower operational overhead, delivering a quantifiable ROI.

  • GPUDirect support: NetApp provides excellent, benchmark-validated support for NVIDIA GPUDirect Storage (GDS). The solution is certified for large-scale AI infrastructure like the NVIDIA DGX SuperPOD and has demonstrated top-tier performance, achieving 351 GiB/s read throughput in NVIDIA's tests. This makes NetApp's enterprise platform a highly credible solution for demanding AI training and inference workloads, traditionally the domain of specialized HPC vendors.
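
As a concrete illustration of the first-party cloud services called out above, the following hedged sketch provisions an Amazon FSx for NetApp ONTAP file system with boto3. The API call is standard AWS FSx; the subnet IDs, sizing, and tags are placeholders.

```python
# Minimal sketch: provisioning a first-party Amazon FSx for NetApp ONTAP
# file system with boto3. Subnet IDs, sizing, and tags are placeholders.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")
resp = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                              # GiB
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],  # hypothetical subnets
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 256,                     # MB/s
        "PreferredSubnetId": "subnet-aaaa1111",
    },
    Tags=[{"Key": "workload", "Value": "oracle-prod"}],
)
print(resp["FileSystem"]["FileSystemId"])
```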

NetApp was classified as an Outperformer given its strong and clear roadmap for the coming year, particularly its strategic development of a disaggregated storage architecture and an integrated AI data platform with a native vector database, which could result in the vendor leaping forward in the market in the next year.

Opportunities
NetApp has room for improvement in a few decision criteria, including:

  • Object storage integration: While NetApp’s S3 implementation on its WAFL file system is technically superior for unified file and object access, it could be improved by incorporating advanced, data-centric API features seen in market-leading native object stores. Top-tier solutions offer deep analytics services and data warehousing capabilities directly through the object API. The opportunity for NetApp is to evolve its S3 offering beyond protocol compatibility to include these types of rich, platform-native analytics, which are increasingly critical for modern data lake and AI workloads.

  • Composable infrastructure: NetApp’s capabilities reflect a sound, forward-looking strategy but a product that is not yet generally available. The company’s plan to release a disaggregated architecture is a direct and necessary response to competitive pressure from modern "shared-everything" architectures. This architectural shift, which allows independent scaling of compute and capacity, is critical for future performance and cost-efficiency in large-scale AI environments. The opportunity is to execute on this roadmap and bring this platform to market, thereby closing a key architectural gap with its most innovative competitors.

  • Data management: The platform provides excellent data management with robust file analytics and policy-based lifecycle management. The opportunity for improvement lies in executing its roadmap to embed a vector database directly into ONTAP. The strategic importance of integrated vector databases for powering RAG applications is immense. Delivering this capability would transform the platform from a passive repository into an active, intelligent data infrastructure for AI.

Purchase Considerations
NetApp’s primary licensing model, ONTAP One, is an all-inclusive software suite that simplifies purchasing but may result in higher initial costs for customers who do not need the full feature set. For consumption-based pricing, NetApp Keystone offers a storage-as-a-service (STaaS) option that aligns costs with usage.

NetApp is unequivocally a Platform Play. Its solutions are designed as a comprehensive data management foundation for the entire enterprise, which is ideal for organizations looking to standardize operations but often requires a full displacement of incumbent solutions to realize its value.

Deployment complexity is mitigated by the BlueXP unified control plane, which uses AIOps-driven wizards and automation to streamline configuration. While BlueXP reduces the need for specialized skills for day-to-day operations, initial deployment of large-scale, hybrid environments will likely benefit from professional services to ensure optimal architecture. Migration from third-party systems can be complex, but NetApp provides tools and services to facilitate the process.

Use Cases
As a Platform Play vendor, NetApp supports most industry verticals, including financial services, healthcare, media, and the public sector. The platform excels at a wide range of use cases, from mission-critical enterprise applications like Oracle and SAP to large-scale virtualization with VMware. With exceptional GPUDirect support and a forward-looking AI roadmap, it is an increasingly strong fit for high-performance AI/ML data pipelines. Its industry-leading public cloud integration makes it a default choice for hybrid cloud data mobility, disaster recovery, and cloud bursting use cases.

Nutanix: Unified Storage

Solution Overview
Nutanix is a prominent vendor in the hybrid multicloud computing market, and its primary scale-out storage offering, Nutanix Unified Storage (NUS), is a software-defined data management platform that functions as an integrated pillar of the broader Nutanix Cloud Platform. NUS consolidates block, file (NFS/SMB), and object (S3-compatible) storage into a single, unified solution, eliminating traditional storage silos. It is architected to deliver a rich suite of data services, including data protection, analytics, and ransomware protection, with a consistent operational model. The platform is designed for deployment flexibility across the core data center, edge locations, and public clouds, including native deployments in AWS and Azure, with SKUs including Nutanix Unified Storage Starter and Pro.

Nutanix is positioned in the Maturity hemisphere. The solution's core platform will look and feel largely the same over the contract lifecycle, as the vendor prioritizes stability and continuity for its foundational infrastructure. This methodical approach is complemented by an aggressive innovation strategy for data services, which are often delivered as integrated but modular capabilities, such as the SaaS-based Nutanix Data Lens. This allows for rapid advancement without disrupting the underlying platform, justifying its pace of development.

Nutanix is positioned as a Leader and Fast Mover in the Maturity/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
Nutanix scored well on a number of decision criteria, including:

  • Object storage integration: Nutanix provides a superior level of object storage integration. Its platform includes a native, S3-compatible object service as a core component, not a bolted-on gateway. This is powerfully extended by its ability to create a federated namespace that transparently spans on-premises Nutanix Objects and public cloud storage like AWS S3, enabling advanced hybrid cloud workflows and seamless data mobility.

  • Kubernetes support: The solution delivers a comprehensive experience for Kubernetes that extends far beyond a basic CSI driver. Through the Nutanix Data Services for Kubernetes (NDK), it unifies and simplifies the management of cloud-native applications by extending enterprise-grade data services (including application-consistent data protection, disaster recovery, and operational automation) directly to containerized workloads. A snapshot-request sketch follows this list.

  • GPUDirect support: Support for GPU-accelerated workloads is solid, demonstrated by its validated NVIDIA GPUDirect Storage certification. This implementation provides a highly optimized, low-latency data path for demanding AI/ML applications by enabling direct, zero-copy data transfers between storage and GPU memory via NFS over RDMA, supported on both ESXi and AHV hypervisors.
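
To illustrate the Kubernetes data services noted above, here is a minimal, hedged sketch that requests a CSI volume snapshot of a PVC with the official Kubernetes Python client. The VolumeSnapshotClass name is a placeholder for whatever class the Nutanix CSI driver registers in a given cluster; NDK layers application-consistent protection on top of primitives like this.

```python
# Hedged sketch: requesting a CSI volume snapshot of a PVC with the official
# Kubernetes Python client. The VolumeSnapshotClass name is a placeholder.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "pg-data-snap-001", "namespace": "prod"},
    "spec": {
        "volumeSnapshotClassName": "nutanix-snapshot-class",  # hypothetical
        "source": {"persistentVolumeClaimName": "pg-data"},
    },
}
api.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="prod",
    plural="volumesnapshots",
    body=snapshot,
)
```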

Opportunities
Nutanix has room for improvement in a few decision criteria, including:

  • AI/ML-based analytics and management: While Nutanix provides a strong AIOps implementation using machine learning for predictive capacity planning and dynamic anomaly detection, it could improve by evolving these capabilities into a more fully autonomous operational model. The opportunity lies in enhancing the platform to provide proactive, AI-driven recommendations for performance tuning, automated remediation of complex issues, and prescriptive cost optimization across the entire hybrid cloud infrastructure, which would elevate it to an exceptional level.

  • Composable infrastructure: Nutanix offers a capable approach to composability through its disaggregated scaling model, which allows for the independent scaling of compute-only or storage-heavy nodes. The platform could improve by maturing this into a true API-driven composable framework that allows for the dynamic, programmatic composition and recomposition of granular resource pools to meet specific workload demands in real time, moving beyond node-level scaling to a more fluid resource model.

  • Data management: Although the platform provides a comprehensive suite of native data management tools, including data reduction and policy-based automation, unified across file, block, and object storage, the opportunity for improvement lies in expanding these capabilities to include more advanced, content-aware data governance. Enhancing the platform with richer metadata tagging, deeper search capabilities, and native integrations with third-party data classification and governance tools would allow customers to enforce enterprise-wide policies more effectively across the entire data lifecycle.

Purchase Considerations
Nutanix utilizes a transparent, capacity-based subscription model with a single, portable software license covering all unified storage services (files, objects, volumes). This license, offered in NUS Starter and Pro tiers, simplifies TCO forecasting and provides exceptional flexibility, as it can be redeployed across on-premises, edge, and cloud environments as business needs evolve. As a comprehensive Platform Play solution, NUS is designed to be a foundational data management layer. Decision-makers should recognize that achieving the full value proposition (particularly in operational simplicity and long-term TCO) will likely involve a strategic consolidation of incumbent point solutions. The solution is software-defined, requiring separate hardware procurement, though Nutanix maintains a broad hardware compatibility list and partners with OEMs for appliance-like deployment experiences. While the platform is designed for ease of use, organizations undertaking large-scale or complex migrations may benefit from optional professional services to ensure a smooth transition and optimized configuration.

Use Cases
As a Platform Play, Nutanix Unified Storage is a versatile solution supporting a wide array of use cases across nearly all industry verticals, including finance, healthcare, media, and the public sector. It is exceptionally well suited for organizations looking to consolidate storage infrastructure, support high-performance file workloads, manage large-scale video surveillance data, and provide a resilient target for backup and archiving. Furthermore, its strong performance and advanced data services make it an excellent fit for powering next-generation workloads, including AI/ML data pipelines, advanced analytics, and providing persistent, enterprise-grade storage for both object-native and containerized applications.

OSNexus: QuantaStor

Solution Overview
OSNexus provides QuantaStor, a software-defined storage (SDS) platform that delivers unified file, block, and object storage. The solution is built upon the open source Ceph storage platform and is packaged as an ISO image for deployment on certified commodity hardware, as a virtual appliance, or in the cloud. A core component is the QuantaStor storage grid, a distributed control plane that enables unified management of systems and clusters across multiple sites.

OSNexus pursues a general strategy, positioning QuantaStor as a comprehensive platform for a broad spectrum of enterprise use cases. This aligns with its classification as a Platform Play. The solution is placed in the Innovation hemisphere, reflecting its frequent release cadence and an aggressive roadmap that includes significant new features such as enhanced multitenancy for block storage and a universal container plugin. This approach values rapid advancement and responsiveness to market demands, ensuring the solution will evolve significantly over the contract lifecycle.

OSNexus is positioned as a Challenger and Fast Mover in the Innovation/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
OSNexus scored well on a number of decision criteria, including:

  • Kubernetes support: The platform provides standout support for containerized workloads by leveraging the standard, full-featured Ceph-CSI driver. This strategy allows OSNexus to deliver robust, enterprise-grade capabilities for dynamic provisioning, snapshots, and cloning directly through Kubernetes APIs, inheriting best-in-class functionality from a mature open source project.

  • Object storage integration: QuantaStor exhibits a highly flexible approach to object storage, functioning as both a native S3-compatible platform for modern applications and as an intelligent gateway to other on-premises or cloud object stores. Its policy-based engine for tiering file data to object storage, with transparent access via stubs, provides a sophisticated data fabric that goes beyond simple archiving.

  • NVMe-oF and NVMe/TCP: The platform offers excellent implementation of modern storage fabrics, using NVMe-oF as a core architectural element. It supports both NVMe/TCP for low-latency client access and NVMe-oF over RDMA for its backend fabric, enabling the creation of composable storage infrastructure by integrating with disaggregated NVMe shelves from partners like Western Digital.
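
As a concrete illustration of the NVMe/TCP client path described in the last bullet, the following hedged sketch drives the standard Linux nvme-cli tool from Python. The portal address and subsystem NQN are placeholders, and creating the volume and host entry would happen in QuantaStor beforehand; this is the stock Linux attach flow, not a QuantaStor-specific utility.

```python
# Hedged sketch: attaching an NVMe/TCP namespace from a Linux client using
# standard nvme-cli, driven from Python. Portal IP and NQN are placeholders.
import subprocess

PORTAL = "10.0.40.15"                                # hypothetical portal IP
NQN = "nqn.2024-01.com.example:quantastor:vol1"      # hypothetical subsystem NQN

# Discover subsystems exported over NVMe/TCP on the standard port 4420.
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", PORTAL, "-s", "4420"],
    check=True,
)
# Connect; the namespace then appears as a local /dev/nvmeXnY block device.
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-a", PORTAL, "-s", "4420", "-n", NQN],
    check=True,
)
```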

Opportunities
OSNexus has room for improvement in a few decision criteria, including:

  • AI/ML-based analytics and management: The platform currently has no AI/ML-based analytics or management capabilities. This is a significant gap compared to market leaders that offer mature AIOps platforms for predictive analytics, proactive maintenance, and autonomous operations. The absence of these features can increase operational overhead, especially in the large-scale environments QuantaStor is designed to serve.

  • GPUDirect support: This feature is not present in the QuantaStor platform. The vendor confirms it is investigating the technology but currently offers no support for direct, low-latency data transfer between storage and GPU memory. This limits the solution's applicability for performance-sensitive, large-scale AI/ML training workloads that rely on NVIDIA GPUDirect Storage to eliminate data pipeline bottlenecks.

  • Public cloud integration: The solution’s integration with public clouds is limited, primarily functioning as a mechanism to back up or archive on-premises data to a cloud object store. It lacks the advanced hybrid capabilities of a unified global namespace, cloud bursting, or a full-featured cloud-native deployment, which are characteristic of more deeply integrated platform solutions.

Purchase Considerations
OSNexus offers an exceptionally transparent and simple licensing model based on a capacity-based subscription, which includes all features, maintenance, and support with no hidden costs. This all-inclusive approach simplifies TCO forecasting and de-risks procurement. As a Platform Play, QuantaStor is a comprehensive solution best suited for organizations looking to consolidate multiple storage workloads or for greenfield deployments. While the platform’s underlying Ceph technology is complex, OSNexus effectively abstracts this complexity through a well-designed, GUI-driven management framework, making it accessible to IT generalists.

However, prospective buyers must exercise caution regarding upgradability. Vendor claims of nondisruptive upgrades are contradicted by the solution's own technical documentation, which details a manual and service-impacting process. This presents a significant operational risk and potential for hidden costs over the product lifecycle that must be carefully evaluated.

Use Cases
As a Platform Play vendor, QuantaStor is designed to support a broad set of use cases across multiple industries rather than focusing on specific verticals. It is well suited for organizations requiring high-capacity, scalable storage for workloads such as backup and archive, media asset management, and healthcare R&D. Its high-performance object storage capabilities make it a strong fit for AI/ML data repositories and large-scale analytics platforms. Additionally, its robust block storage protocols, including NVMe-oF, make it a capable backend for virtualized environments and container platforms like Kubernetes and OpenStack.

Pure Storage: FlashBlade and FlashArray*

Solution Overview
Pure Storage delivers a comprehensive data platform centered on its "Enterprise Data Cloud" vision, designed to unify the management of block, file, and object data across hybrid and multicloud environments. The platform is built on two primary hardware architectures: the scale-up FlashArray family for performance-sensitive block and file workloads, and the native scale-out FlashBlade family for high-throughput unstructured file and object data. These are managed through a sophisticated software layer comprising the Purity operating environment, the Pure1 AIOps platform for predictive management, and the Pure Fusion control plane for autonomous, policy-driven orchestration.

Pure Storage’s strategy is a broad, general-purpose Platform Play aimed at consolidating diverse enterprise workloads. The solution is positioned in the Innovation hemisphere, as Pure Storage consistently demonstrates an aggressive roadmap and a commitment to rapid software advancement. This forward-leaning approach is centered on disruptive software innovations in AIOps, generative AI, and autonomous orchestration that challenge established norms and transform storage management from a reactive to a proactive, automated discipline.

Pure Storage is positioned as a Leader and Fast Mover in the Innovation/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
Pure Storage performed well across a number of the decision criteria, including:

  • Object storage integration: The platform’s integration with object storage is exceptional, providing native, high-performance S3 protocol support on both its FlashBlade and FlashArray platforms. This approach elevates object storage from a simple archival tier to a first-class protocol for primary, performance-sensitive workloads like AI and analytics. The Unified Fast File and Object (UFFO) architecture on FlashBlade allows seamless, multiprotocol access to a single data pool, eliminating silos and simplifying modern data pipelines. A short upload sketch follows this list.

  • AI/ML-based analytics and management: Pure Storage is a market leader in AIOps, delivering a robust solution through its Pure1 platform and Meta AI engine. The system moves beyond basic predictive analytics to offer full AIOps capabilities, including workload simulation for planning, full-stack root cause diagnosis, and a generative AI copilot for natural language-based administration. This transforms storage management from a reactive to a proactive, autonomous operation, significantly reducing administrative overhead.

  • NVMe-oF and NVMe/TCP: The implementation of NVMe over Fabrics is solid, as the protocol is a core architectural element of the platform, not merely an add-on feature. The system provides comprehensive support for all major fabrics (NVMe/TCP, NVMe/RoCE, and NVMe/FC) and uniquely utilizes NVMe-oF for its own internal shelf-to-chassis connectivity, demonstrating a true end-to-end NVMe design that eliminates performance bottlenecks.
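
To make the "S3 as a first-class, performance-sensitive protocol" point concrete, here is a hedged sketch that pushes a large object with parallel multipart uploads via boto3's transfer manager. The FlashBlade data VIP, bucket, and credentials are placeholders.

```python
# Hedged sketch: treating S3 as a primary, performance-sensitive protocol.
# A large object is uploaded with parallel multipart transfers via boto3's
# transfer manager. Endpoint and credentials are placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client(
    "s3",
    endpoint_url="https://10.1.2.3",        # hypothetical FlashBlade data VIP
    aws_access_key_id="DEMO_KEY",
    aws_secret_access_key="DEMO_SECRET",
)
cfg = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,   # switch to multipart at 64 MiB
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=16,                     # parallel part uploads
)
s3.upload_file("checkpoint-0042.pt", "ml-artifacts", "ckpt/0042.pt", Config=cfg)
```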

Opportunities
Pure Storage has room for improvement in a few decision criteria, including:

  • Public cloud integration: While the platform offers strong public cloud integration centered on Pure Cloud Block Store (CBS), which runs the identical Purity OS natively in AWS and Azure, there is an opportunity for enhancement. CBS provides perfect architectural consistency for block storage, but the platform lacks an equivalent first-party, Pure-managed service for native file and object workloads in the cloud. Expanding its cloud offerings to include a "Cloud FlashBlade" service would close this gap and deliver a more complete, unified hybrid cloud platform across all data types.

  • Composable infrastructure: The platform’s architecture is evolving toward the principles of composability but is not yet a fully composable system. The new FlashBlade//EXA platform introduces a disaggregated architecture, separating metadata and data nodes to enable independent scaling, which is a significant step forward. However, the Pure Fusion control plane orchestrates storage services rather than composing disaggregated physical hardware resources (compute, storage, networking) into virtual servers. The opportunity lies in extending Pure Fusion to integrate with and participate in broader composable ecosystems.

  • Data management: The platform provides strong, infrastructure-focused data analytics and management through Pure1 but relies on a partner-led strategy for content-aware data governance. While deep integrations with best-of-breed solutions like Varonis deliver comprehensive capabilities, the lack of native, built-in tools for advanced data classification, e-discovery, and automated metadata tagging represents an opportunity. Building these features directly into the platform would create a more seamless, single-vendor data intelligence solution.

Purchase Considerations
Pure Storage’s Evergreen//One subscription is a key purchase consideration, fundamentally altering the traditional procurement model. It provides a true STaaS offering with a transparent, publicly available service catalog and guaranteed SLAs for performance, uptime, and even energy efficiency. This shifts storage from a large CapEx investment to a predictable OpEx model, eliminating the risks of overprovisioning and the costs of future forklift upgrades.

As a Platform Play, the solution is designed for enterprise-wide workload consolidation. Its value is maximized when it displaces multiple legacy systems for file, block, and object storage. Deployment complexity is notably low, a core tenet of the company's value proposition, and the Pure1 AIOps platform significantly reduces ongoing administrative burden. While migration from legacy systems requires standard enterprise planning, the Evergreen model eliminates the need for future data migrations for technology refreshes, a significant long-term TCO advantage.

Use Cases
As a Platform Play vendor, Pure Storage supports a broad spectrum of use cases across nearly all industry verticals, including financial services, healthcare, and media and entertainment. The FlashBlade family, with its native scale-out UFFO architecture and support for technologies like NVIDIA GPUDirect Storage, is purpose-built for high-performance workloads such as AI/ML data pipelines, HPC, and rapid restore for cyber resilience. The FlashArray family excels at supporting mission-critical, latency-sensitive enterprise applications, including transactional databases and large-scale virtualization environments. The unified platform, managed by Pure1 and Pure Fusion, is ideal for organizations seeking to consolidate these diverse workloads and eliminate data silos.

Quantum: Myriad, StorNext

Solution Overview
Quantum addresses the scale-out storage market with a broad, dual-product portfolio designed to manage the entire unstructured data lifecycle. The company combines two distinct but complementary solutions: the mature, high-throughput StorNext platform for large-file workflows and the modern, all-flash Myriad platform for low-latency, IOPS-intensive applications. This strategy provides a comprehensive platform that covers a wide spectrum of use cases, from traditional media archives to emerging AI pipelines.

StorNext is an established hybrid file storage system, available as an appliance or software, built on a metadata controller architecture that excels at large, streaming file access and supports a vast range of storage tiers, including NVMe, HDD, object, and tape. In contrast, Myriad is a newer, software-defined, all-flash platform architected with microservices orchestrated by Kubernetes. It consists of Load Balancer Nodes, NVMe Storage Server Nodes, and a Deployment Node, all interconnected via a high-speed internal RDMA fabric, targeting modern, performance-sensitive workloads. Quantum’s approach prioritizes stability and continuity with the well-established StorNext platform while pursuing an aggressive roadmap for Myriad to address feature gaps in emerging areas, aligning with its innovative market position.

Quantum is positioned as a Challenger and Fast Mover in the Innovation/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
Quantum scored well on a number of decision criteria, including:

  • Public cloud integration: The portfolio delivers a standout hybrid cloud strategy by combining two distinct approaches. The mature StorNext platform provides deep integration and a unified namespace across on-premises and cloud environments. This is complemented by the FlexSync engine, which enables robust, policy-driven replication of data and metadata to any S3-compatible cloud target, facilitating disaster recovery and data mobility workflows.

  • Object storage integration: The combined solution provides a multifaceted and highly effective integration with object storage. StorNext’s mature FlexTier service allows for advanced, transparent tiering that treats object storage as a seamless extension of the file system namespace. Myriad adds value with simple, efficient replication to on-premises or cloud object stores via FlexSync, creating a powerful ecosystem for both active archiving and data protection use cases.

  • NVMe-oF and NVMe/TCP: Quantum capably addresses the need for low-latency fabric performance, albeit with a nonstandard approach on its modern platform. Myriad employs a proprietary RDMA fabric that delivers the performance benefits of NVMe/RoCE with simplified deployment, avoiding client-side configuration complexity. 

Opportunities
Quantum has room for improvement in a few decision criteria, including:

  • AI/ML-based analytics and management: The portfolio currently lags behind expectations in providing the predictive, AI-driven AIOps capabilities that are becoming standard in the market. The existing Cloud-Based Analytics portal offers only basic resource utilization monitoring and lacks the proactive, intelligent management features needed to reduce administrative overhead and provide deep operational insights for large-scale environments.

  • Data management: The platform's native data management capabilities are limited, focusing on infrastructure-level metrics rather than content-aware intelligence. It lacks foundational data reduction on Myriad and native tools for automated data classification, tagging, and governance across the portfolio. Integrating these capabilities, which are on the Myriad roadmap, would significantly enhance the platform's value for analytics, compliance, and data security use cases.

  • GPUDirect support: While this feature is not yet generally available (GA), Quantum has publicly addressed this competitive gap by announcing support for NVIDIA GPUDirect Storage, with a GA release targeted for the second half of 2025. The planned implementation, which enables GPU-equipped workstations to operate as native Myriad nodes, represents an innovative approach to creating a direct data path for AI/ML workloads. Delivering this announced functionality will be a critical opportunity for Quantum to validate its AI strategy and will significantly strengthen its competitive position for modern analytics and data pipeline workloads.

Purchase Considerations
Prospective buyers should evaluate Quantum as a portfolio of specialized solutions rather than a single, unified platform. The company offers a transparent, capacity-based subscription model for both Myriad and StorNext; however, customers may need to license and deploy two separate products to access the full suite of capabilities described in this report.

While positioned as a Platform Play due to the breadth of its combined offerings, its implementation is a portfolio of two distinct products. This requires customers to carefully map their specific workloads to the correct solution: StorNext for high-throughput archiving and large-file streaming, and Myriad for low-latency, high-IOPS applications. This contrasts with competitors offering a single, general-purpose system. Deploying and managing two separate systems with different UIs and architectural principles can increase operational overhead and complexity, but it may also provide welcome flexibility, depending on the use case. Professional services may be required to effectively integrate both platforms into a cohesive workflow, a factor that should be considered in the total cost of ownership.

Use Cases
The platform's strength lies in its ability to serve two distinct, high-value market segments. StorNext is a purpose-built solution that is dominant in the media and entertainment vertical for demanding, large-file streaming workflows such as video post-production, broadcasting, and large-scale archives. It is also strong in life sciences, satellite imaging, and other research fields that generate massive sequential files.

Myriad is targeted at emerging, performance-sensitive workloads. Its all-flash architecture is designed for AI/ML data pipelines, animation and VFX rendering, life sciences and bioinformatics, and consolidating legacy NAS workloads onto a high-performance platform. The combination of the two solutions can be powerful for organizations with diverse needs. For example, a media company could use Myriad for high-speed, small-file rendering and VFX work while using StorNext to manage the final 8K masters and archive completed projects, demonstrating the breadth of the platform play.

Qumulo: Cloud Data Platform

Solution Overview
Qumulo is a software-defined storage vendor focused on providing a unified, hybrid cloud data platform for managing unstructured data at scale. The company’s primary offering is the Qumulo Cloud Data Platform, a modular suite of integrated products designed to deliver a consistent feature set and user experience regardless of where data resides. Key components of the platform include Qumulo Core for on-premises deployments on commodity hardware; Cloud Native Qumulo (CNQ) for customer-managed instances on AWS, Azure, GCP, and OCI; and Azure Native Qumulo (ANQ), a fully managed SaaS offering. These are unified by the Qumulo Cloud Data Fabric, which creates a single, globally consistent namespace.

Qumulo employs a general, platform-centric strategy, reflected in its “Run Anywhere” philosophy. This approach is designed to support any unstructured data workload across all major protocols (NFS, SMB, S3) with full feature parity, which is the hallmark of a Platform Play. The solution is positioned in the Innovation hemisphere due to its fundamentally cloud-native architecture, which is designed for elasticity and rapid advancement. The vendor delivers monthly software updates and maintains an aggressive roadmap that includes significant architectural shifts like Qumulo Stratus for on-premises object storage and AI-enabled data classification, indicating the solution will evolve significantly over the contract lifecycle.

Qumulo is positioned as a Challenger and Fast Mover in the Innovation/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
Qumulo scored well on a number of decision criteria, including:

  • Public cloud integration: Qumulo’s “Run Anywhere” philosophy is fully realized in its exceptional public cloud integration. The platform’s architecture is truly cloud-native, decoupling compute and storage to allow for independent, elastic scaling of both capacity and performance on AWS, Azure, GCP, and OCI. The Cloud Data Fabric extends this by creating a unified namespace across on-premises, edge, and cloud environments, enabling seamless hybrid and multicloud operations that align with the best solutions in the market.

  • Kubernetes support: The platform provides a robust and well-featured CSI driver that goes beyond basic persistent volume provisioning to support dynamic operations like volume sizing and expansion. This enables modern containerized applications to consume storage with the same enterprise-grade features as traditional workloads, eliminating the need to move or copy data and simplifying Kubernetes workflows. A volume-expansion sketch follows this list.

  • Composable infrastructure: Qumulo's cloud-native architecture is a leading example of the principles of composable infrastructure in practice. By disaggregating compute (virtual machine instances) and storage (cloud object storage) resources, it allows them to be scaled independently and composed dynamically via software to meet specific workload demands. This is an inherent property of its design, demonstrating a strong alignment with this emerging trend.
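
The volume expansion capability mentioned in the Kubernetes bullet can be sketched as follows, using the official Kubernetes Python client to patch a PVC's requested size. The PVC and namespace names are placeholders, and the backing StorageClass must allow expansion.

```python
# Hedged sketch: online volume expansion through the CSI layer by patching a
# PVC's requested size. Names are placeholders; the StorageClass must have
# allowVolumeExpansion enabled.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

patch = {"spec": {"resources": {"requests": {"storage": "2Ti"}}}}
core.patch_namespaced_persistent_volume_claim(
    name="render-scratch",   # hypothetical PVC
    namespace="vfx",
    body=patch,
)
```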

Opportunities
Qumulo has room for improvement in a few decision criteria, including:

  • Object storage integration: The platform’s on-premises offering currently lacks the deep, foundational integration with object storage that defines its cloud-native products. On-premises object storage is treated primarily as a replication target rather than the primary persistence layer. While the long-term strategy with Qumulo Stratus is to align the on-premises architecture with the cloud model, this disparity creates a significant architectural gap today.

  • Data management: The platform’s native data management capabilities are limited. It lacks advanced, content-aware features such as automated data classification, policy-based lifecycle management, and integrated governance tools. For a solution positioned as a comprehensive platform, this absence is a notable weakness, forcing customers to rely on third-party integrations for critical data insight and control functions.

  • AI-driven cyber resilience orchestration: Qumulo provides the necessary building blocks for cyber resilience, such as immutable snapshots and a rich API, but the "AI-driven" intelligence and orchestration are delivered via third-party partners like Superna and Varonis. This reliance on external tools rather than a deeply integrated native capability means the solution lags behind market leaders who are building predictive threat detection and automated response directly into their platforms.

Purchase Considerations
Qumulo's licensing is a subscription model based on capacity (per TB per month), which offers a degree of transparency and predictability. However, for cloud deployments, customers are billed separately for the underlying cloud infrastructure, including compute instances and object storage fees, which can complicate TCO calculations. The fully managed Azure Native Qumulo offering simplifies this by bundling costs into a single fee. The solution is a Platform Play, best suited for large enterprises looking to consolidate file data across a hybrid cloud estate. Its value is realized when used to displace multiple incumbent systems, not as a point solution for a single feature.

The platform is squarely aimed at large enterprises; its feature set and scalability are likely not cost-effective for most SMBs. Deployment is designed for simplicity, with cloud instances deployable in minutes via templates and a straightforward software installation for on-premises clusters. The intuitive, consistent UI across all deployment types, combined with a highly rated customer success program, reduces the administrative burden and training requirements. Migration from legacy systems requires careful planning, as with any platform replacement, though Qumulo's broad protocol flexibility can ease the transition.

Use Cases
As a Platform Play vendor, Qumulo supports nearly all industry verticals and unstructured data use cases. Its ability to serve data via NFS, SMB, and S3 protocols with full feature parity across on-premises, cloud, and edge environments makes it a general-purpose solution. The platform is proven in demanding verticals like media and entertainment for production workflows, healthcare for medical imaging and genomics, life sciences for pharmaceutical discovery, financial services, and AI research. Any workflow that requires scalable, high-performance file access across a hybrid environment is a strong fit for the Qumulo Cloud Data Fabric.

Quobyte: Quobyte

Solution Overview
Quobyte provides a software-defined, high-performance scale-out storage solution built on a parallel, distributed, POSIX-compliant file system. Its shared-nothing architecture is a key differentiator, running entirely in user-space on any commodity x86 or ARM server without requiring kernel modules or custom network drivers. This design simplifies deployment, enhances stability, and enables nondisruptive operations and upgrades. The platform offers a unified namespace that supports a wide range of file (NFS, SMB) and object (S3) protocols, allowing seamless data access across diverse environments and eliminating data silos.

Quobyte’s strategy is a general, platform-centric approach, addressing a broad spectrum of storage needs from high-performance computing (HPC) to enterprise file sharing. The solution will look and feel different over the contract lifecycle. Quobyte delivers an aggressive roadmap, is flexible and responsive to the market, and values rapid advancement and frequent updates. This is evidenced by recent releases that have added significant new capabilities, including ARM architecture support, enhanced cloud interoperability with tiering and copy functions, and integrated S3 versioning and object locking.

Quobyte is positioned as a Challenger and Fast Mover in the Innovation/Platform Play quadrant of the scale-out storage Radar chart. 

Strengths
Quobyte scored well on a number of decision criteria, including:

  • Kubernetes support: The solution offers comprehensive integration that goes beyond a basic CSI driver. Quobyte can run its entire storage cluster within Kubernetes using a Helm Chart, and its full-featured CSI plugin supports dynamic provisioning, volume expansion, snapshots, and ReadWriteMany (RWX) volumes. This capability is tightly integrated with the platform’s multitenancy architecture, allowing Kubernetes namespaces to be mapped directly to Quobyte tenants for true logical isolation and self-service storage management in containerized environments.

  • Object storage integration: Quobyte’s platform is built on a unified namespace where a file is an object and vice versa, allowing data written via one protocol to be immediately and consistently accessed by another. Its S3 gateway is a custom, high-performance implementation, not a simple open source add-on. This deep integration allows the system to enforce unified, hierarchical file system access control lists (ACLs) on S3 object requests, providing more granular and consistent security than typical object stores.

  • Data management: The platform’s data management capabilities are driven by its powerful Policy Engine and a unique File Query Engine. The Policy Engine provides granular, automated control over data placement, tiering, redundancy schemes (replication versus erasure coding), and lifecycle policies. The File Query Engine enables real-time, SQL-like queries directly against all file system metadata, including custom user-defined tags. This allows for rapid data classification and discovery for use cases like AI/ML data labeling without the overhead of a separate database.
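
To suggest how the tagging side of the File Query Engine might be exercised, here is a minimal sketch that assumes user-defined tags surface as POSIX extended attributes on a mounted Quobyte volume; that mechanism, the mount path, and the attribute name are assumptions for illustration, and the SQL-like queries themselves run through Quobyte's own tooling.

```python
# Minimal sketch, assuming user-defined tags are exposed as POSIX extended
# attributes on a mounted Quobyte volume (an assumption for illustration;
# the actual tagging mechanism and query interface are Quobyte-specific).
# Linux-only: os.setxattr is not available on all platforms.
import os

MOUNT = "/quobyte/training"                  # hypothetical volume mount

for name in os.listdir(MOUNT):
    path = os.path.join(MOUNT, name)
    if path.endswith(".jpg"):
        # Attach a custom tag that a metadata query could later select on,
        # e.g., all files where label = "verified".
        os.setxattr(path, "user.label", b"verified")
        print("tagged", path)
```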

Opportunities
Quobyte has room for improvement in a few decision criteria, including:

  • AI/ML-based analytics and management: The solution does not offer the AI/ML-based predictive analytics or autonomous AIOps capabilities that are becoming more common across the market. This is a deliberate philosophical choice by the vendor, which prioritizes inherent architectural resilience to treat component failures as normal, nondisruptive operational events, thereby reducing the perceived need for predictive maintenance.

  • GPUDirect support: The platform currently lacks native support for NVIDIA GPUDirect Storage (GDS), a key technology for creating an optimized, direct data path between storage and GPU memory. While GDS support is on the roadmap, Quobyte’s current strategy is to deliver exceptional performance for AI workloads through its highly optimized support for RDMA networking (RoCE, InfiniBand), which has been validated by its leading results in the demanding MLPerf storage benchmark.

  • Composable infrastructure: The platform does not support the disaggregation of hardware resources that defines composable infrastructure. This is another intentional architectural decision reflecting a focus on maximizing TCO and operational simplicity by leveraging standard, aggregated commodity servers rather than the potential complexity and cost of disaggregated hardware models.

Purchase Considerations
Quobyte offers a straightforward, subscription-based licensing model based on capacity, with all features and support included. This transparent approach simplifies TCO calculations and provides predictable costs for expansion. As a Platform Play solution, Quobyte is designed to be a comprehensive data platform for consolidating diverse file and object workloads, which offers significant benefits but may require displacing incumbent point solutions to achieve maximum value.

Deployment on commodity hardware avoids vendor lock-in and can lower capital expenditures. This aligns with the platform’s high marks for ease of use and upgradability, which are direct results of its user-space, API-first design that simplifies installation, automates management, and enables fully nondisruptive upgrades. Buyers should note the platform’s deliberate lack of data reduction features like deduplication and compression, a trade-off made to maximize raw performance. While this may result in a larger storage footprint, costs can be effectively managed via the platform's intelligent, policy-driven tiering to more cost-effective media.

Use Cases
As a Platform Play vendor, Quobyte supports a broad range of industry verticals and general-purpose enterprise use cases, including large-scale file sharing and active archives. The solution is particularly well suited for performance-intensive workloads where its parallel architecture and low-latency performance excel.

This makes it a strong fit for workloads such as AI/ML, where its benchmark-validated performance and metadata query capabilities accelerate data ingestion and training pipelines. It is also ideal for HPC environments in financial services, life sciences, and research that depend on high scalability and RDMA support. In media and entertainment, the platform’s high-throughput streaming and multiprotocol access are beneficial for demanding workflows like visual effects rendering and collaborative video editing.

Scality: RING, ARTESCA

Solution Overview
Scality is a global provider of software-defined storage (SDS) solutions, specializing in cyber-resilient, distributed file and object storage for large-scale, data-intensive environments. The company's portfolio is a suite of integrated products rather than a single offering. The foundation is Scality RING, a petabyte-scale object storage platform first released in 2010 that provides the core S3 and file (NFS, SMB) access capabilities. This is complemented by ARTESCA, a modern, lightweight, and cyber-resilient storage appliance often targeted at backup and modern application use cases.

Scality's strategy is a broad Platform Play, moving beyond its object storage roots to address a wide array of unstructured data challenges, including backup targets, big data analytics, and AI data lakes. This report positions the solution in the Maturity hemisphere of the Radar, reflecting the long-standing, battle-tested architecture of the core RING platform. Scality prioritizes the stability and methodical evolution of this proven foundation, ensuring continuity and reliability for its enterprise customers. Innovations such as ARTESCA are significant but represent incremental improvement and assured compatibility over disruptive, ground-up redesigns.

Scality is positioned as a Challenger and Fast Mover in the Maturity/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
Scality scored well on a number of decision criteria, including:

  • Public cloud integration: The platform's capabilities in this area are superior, driven largely by an integrated multicloud data controller internal to RING and ARTESCA. Through this, the product provides a unified namespace that allows for transparent data access and intelligent, policy-based data placement across on-premises Scality RING deployments and public clouds like AWS, Azure, and Google Cloud. This creates a true hybrid cloud data fabric, enabling advanced operations such as cloud bursting for compute, automated disaster recovery, and tiering to archival cloud services, all managed through a single S3 API interface.  

  • Kubernetes integration: Scality demonstrates a forward-looking approach to containerized workloads. Instead of only providing a standard CSI driver, which is primarily designed for block and file volumes, the company actively champions and contributes to the emerging Container Object Storage Interface (COSI) standard. This is a critical distinction, as COSI is purpose-built to manage object storage resources like buckets and credentials declaratively within Kubernetes. This approach represents a more appropriate and architecturally sound model for object-based workloads, positioning Scality as a thought leader in enabling modern, cloud-native applications.  

  • Object storage integration: This is a foundational strength, as Scality’s platform is a native object store at its core, not a file system with an added S3 gateway. Both RING and ARTESCA are built as S3-compatible platforms, providing standout support for the S3 API, including advanced features like object locking for immutability, versioning, and lifecycle management. This native architecture provides superior scalability and data durability (up to 14 nines through erasure coding), making it a robust and highly compatible foundation for modern applications built on an S3-first paradigm.  
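To illustrate how applications typically consume these immutability features, the minimal sketch below uses the standard boto3 SDK against a generic S3-compatible endpoint. The endpoint URL, credentials, bucket, and object key are hypothetical placeholders, and the retention window is illustrative; this is generic S3 Object Lock usage rather than anything Scality-specific.

```python
# Minimal sketch: writing an immutable object to an S3-compatible
# endpoint. Endpoint, credentials, and names are placeholders.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.internal",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Object Lock must be enabled when the bucket is created; versioning
# is enabled automatically for such buckets.
s3.create_bucket(Bucket="backups", ObjectLockEnabledForBucket=True)

# This object cannot be deleted or overwritten until the retention
# date passes, even by an administrator.
s3.put_object(
    Bucket="backups",
    Key="veeam/job-001.vbk",
    Body=b"...backup payload...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```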

Opportunities
Scality has room for improvement in a few decision criteria, including:

  • Data management: The platform provides capable tools for managing the data container, including a sophisticated policy-based engine for data movement and tiering. However, it has room to improve in providing insight into the data's content. The solution lacks the advanced, content-aware features of leading platforms, such as native AI-driven data classification, automated metadata tagging, or integrated security risk assessment tools. Adding these capabilities would elevate the platform from a passive repository to an active data intelligence and governance tool.  

  • GPUDirect support: Scality currently has limited support for NVIDIA GPUDirect Storage (GDS), a key technology for accelerating AI/ML workloads. This is due to the fundamental protocol mismatch between the HTTP-based S3 API and the direct memory access required by GDS. However, the company is actively engaged in innovative R&D on a novel S3 extension called "ObjectMap" to bridge this gap. While this work demonstrates a clear vision, it is not yet a production feature. Delivering this capability would unlock significant performance for the demanding AI/ML market, which is currently dominated by high-performance parallel file systems.

  • Composable infrastructure: The platform's architecture aligns well with the principles of composability, as it naturally disaggregates compute (S3 connectors) and capacity (storage nodes), allowing them to be scaled independently. This provides a capable implementation of resource pooling. The opportunity lies in building a more sophisticated, API-driven orchestration layer on top of this foundation. This would enable the dynamic, software-defined composition and provisioning of resources, delivering a more cloud-like, programmable infrastructure experience that goes beyond the current architectural separation.

Purchase Considerations
Scality offers predictable, capacity-based subscription licensing for its software-defined solutions, which simplifies budgeting for enterprise customers. As an SDS provider, the total cost of ownership must include the procurement of certified commodity hardware from partners like HPE. The solution is a Platform Play, primarily targeting large enterprises with petabyte-scale data challenges. It is not designed for the SMB market. Adopting Scality is a strategic decision that often requires a greenfield deployment or displacement of incumbent solutions to realize its full value as a consolidated platform for multiple unstructured data workloads.  

The need for professional services varies across the portfolio. The flagship RING product, with its power and complexity at scale, often benefits from Scality's installation, training, and support services, which should be factored into the TCO. In contrast, the ARTESCA appliance is designed for greater simplicity, with an intuitive UI that lowers the barrier to entry for specific use cases like Veeam backups, requiring no deep Linux expertise.

Use Cases
As a Platform Play vendor, Scality supports a broad range of use cases across multiple industries rather than focusing on a narrow niche. The common thread is the need to manage massive volumes of unstructured data with high durability and cost-efficiency. Its strong cyber resilience features, such as S3 Object Lock and immutability, make it an excellent choice for backup modernization projects, particularly as a backup target for applications like Veeam. The platform's native S3 architecture and performance make it a natural fit for big data and analytics workloads, serving as a scalable data lake backend for platforms like Splunk SmartStore and Apache Spark. It is also widely used by cloud service providers to build their own public or private S3-as-a-Service offerings.

ThinkParQ: BeeGFS

Solution Overview
ThinkParQ develops BeeGFS, a parallel file system designed to provide scalable, high-throughput storage access for high-performance computing (HPC), AI, and other demanding environments. BeeGFS is a hardware-independent, software-defined storage (SDS) solution available in both open source and enterprise versions. The solution is a single, integrated file system, which, through a variant called BeeOND (BeeGFS On-Demand), can be deployed temporarily on compute nodes for burst-buffer workloads.

ThinkParQ takes a focused approach, primarily targeting HPC and AI/ML use cases, which aligns with its Feature Play designation. The vendor prioritizes innovation, delivering an aggressive roadmap and frequent updates. This is evidenced by the recent major release of BeeGFS 8, which introduced a rewritten management service and command-line interface, along with a new data management suite. The company continues to innovate by expanding its cloud offerings and planning support for IPv6 and SELinux.

ThinkParQ is positioned as a Challenger and Fast Mover in the Innovation/Feature Play quadrant of the scale-out storage Radar chart.

Strengths
ThinkParQ scored well on a number of decision criteria, including:

  • Object storage integration: With the release of BeeGFS 8, the solution now integrates with a wide range of S3-compatible storage, from on-premises object systems to cloud services and even tape. This allows organizations to leverage economical, scalable object storage as a tier within the BeeGFS namespace, optimizing data placement based on cost and access patterns.

  • Kubernetes support: BeeGFS provides a native CSI driver, enabling seamless integration with container orchestration platforms like Kubernetes. This support is crucial for organizations adopting cloud-native architectures, as it simplifies persistent storage management for containerized workloads and improves the agility of modern application deployments. 

  • GPUDirect support: The solution features strong support for GPUDirect Storage (GDS), which enables direct data transfer between storage and GPU memory, bypassing the CPU. This capability is critical for accelerating high-performance computing and AI/ML workloads by significantly reducing latency and increasing data throughput for GPU-accelerated applications. 

Opportunities
ThinkParQ has room for improvement in a few decision criteria, including:

  • Public cloud integration: While customers can run BeeGFS in the cloud, the solution currently lacks deep, native integration with the major public cloud platforms. ThinkParQ acknowledges this is an area for improvement. Developing a full-featured, cloud-native version of its software with a unified control plane would enhance its capabilities for hybrid and multicloud strategies.

  • AI/ML-based analytics and management: The solution does not currently offer AI/ML-based analytics for system management and does not have it on the short-term roadmap. Incorporating predictive analytics and AI-driven recommendations could help optimize performance, forecast capacity needs, and automate issue resolution, reducing administrative overhead in large-scale environments. 

  • Data management: BeeGFS relies on integration with external third-party tools for policy-based data placement and lifecycle management. Building more comprehensive native data management capabilities (such as automated data lifecycle policies and advanced metadata tagging) would streamline operations and provide greater out-of-the-box value.

Purchase Considerations
ThinkParQ offers a transparent subscription licensing model for BeeGFS, with pricing based on the number of storage and metadata servers under contract. This approach can simplify budgeting for organizations. As a Feature Play solution, BeeGFS is designed for specific high-performance use cases and is often integrated into a broader, best-of-breed storage ecosystem rather than serving as an all-encompassing platform.

The solution is highly specialized for HPC and AI workloads. While ThinkParQ claims BeeGFS is the easiest parallel file system to operate, organizations without experience in HPC environments may face a steeper learning curve compared to more general-purpose storage solutions. Migration from traditional file systems requires careful planning due to the architectural differences of a parallel file system. Although professional services are optional, they are likely beneficial for complex deployments to ensure optimal performance and configuration.

Use Cases
As a Feature Play vendor, ThinkParQ targets specific industry verticals and use cases that demand high-throughput and massively parallel data access. Key industries include life sciences, research institutions, energy, media and entertainment, and financial services.

The solution is ideal for workloads such as large-scale scientific simulations, AI model training and inference, genomic sequencing, seismic data processing, and high-resolution video rendering. Its architecture is built to handle the extreme performance requirements of these data-intensive applications.

TrueNAS

Solution Overview
TrueNAS, formerly known as iXsystems, is a long-standing proponent of open source enterprise storage solutions, with its primary focus centered on the TrueNAS portfolio. The scale-out offering, TrueNAS SCALE, is a single, software-defined solution that provides unified file, block, and object storage. It achieves its platform breadth by building upon a robust ZFS foundation and integrating best-of-breed open source components, such as MinIO for S3-compatible object storage and the "democratic-csi" driver for Kubernetes. TrueNAS’s strategy is to deliver a flexible, hardware-agnostic platform that offers a compelling TCO and freedom from vendor lock-in.

As an innovator, the TrueNAS solution will look and feel different over the contract lifecycle. TrueNAS delivers an aggressive roadmap driven by the rapid evolution of the open source projects it incorporates, valuing frequent updates and responsiveness to the market. This approach allows it to quickly integrate new capabilities developed by a wide community, though it prioritizes this flexibility over the seamlessness of a fully proprietary stack.

TrueNAS is positioned as a Challenger and Fast Mover in the Innovation/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
TrueNAS scored well on a number of decision criteria, including:

  • Kubernetes support: Support for Kubernetes is exceptional, delivered through the officially sponsored and full-featured "democratic-csi" driver. It provides a complete suite of enterprise-grade data services (including dynamic provisioning, volume expansion, snapshots, and cloning), all managed via standard Kubernetes APIs, representing a best-in-class implementation for containerized workloads (see the provisioning sketch after this list).

  • Public cloud integration: The platform offers capable public cloud integration through its "Cloud Sync" feature, providing robust data replication and migration for essential hybrid cloud backup and disaster recovery strategies. While it does not provide a unified global namespace, it meets expectations for policy-based data movement, offering a practical solution for common hybrid use cases.

  • Object storage integration: TrueNAS provides S3-compatible object storage by integrating the industry-standard MinIO platform as a containerized application. While this approach lacks the deeper, unified namespace of some native implementations, it delivers a capable and familiar object store for specific use cases like backup targets and application data.
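As a sketch of the Kubernetes experience referenced above, the snippet below uses the official Kubernetes Python client to request a dynamically provisioned volume. The storage class name "truenas-nfs" is a hypothetical democratic-csi class; the rest is standard Kubernetes API usage, not TrueNAS-specific code.

```python
# Minimal sketch: dynamic volume provisioning through a CSI-backed
# storage class. "truenas-nfs" is an assumed democratic-csi class name.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],
        storage_class_name="truenas-nfs",
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

# The CSI driver carves the volume out of the backing pool on demand.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```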

Opportunities
TrueNAS has room for improvement in a few decision criteria, including:

  • GPUDirect support: The platform does not yet support NVIDIA GPUDirect Storage. This is a significant gap for organizations running high-performance AI/ML training workloads, which rely on this technology for an optimized, low-latency data path directly to GPU memory.

  • Composable infrastructure: TrueNAS does not offer a composable infrastructure model. Its TrueNAS SCALE architecture is hyperconverged, integrating storage and compute on the same node, which is a fundamentally different paradigm from the disaggregated and dynamically pooled resources that define composability.

  • NVMe-oF and NVMe/TCP: While TrueNAS has introduced NVMe-oF and NVMe/TCP client access in a beta release, the feature is not yet generally available. This lack of production-ready status means that customers seeking to deploy modern, low-latency storage fabrics for mission-critical workloads must either wait for a future release or accept the risks associated with non-production software.

Purchase Considerations
Cost transparency is a strength for TrueNAS, rooted in its open source business model. The availability of a free, community-supported version of TrueNAS SCALE, combined with an all-inclusive pricing structure for enterprise hardware appliances, eliminates hidden fees and provides a clear, predictable TCO. The solution is effectively productized as a platform, but buyers should understand that this breadth is achieved by integrating distinct open source components. This makes it an ideal fit for technically proficient organizations that value a vendor-supported open source stack over a proprietary, single-pane-of-glass experience. The solution is well suited for both SMBs and large enterprises that prioritize flexibility and long-term cost savings. While the management UI is modern and capable, the platform's inherent flexibility can introduce complexity, and professional services can be valuable for initial deployment and performance tuning.

Use Cases
As a Platform Play vendor, TrueNAS supports a broad range of general-purpose enterprise use cases. It is an excellent fit for organizations requiring robust and scalable file services, virtualization storage for platforms like VMware and Proxmox, and high-performance storage for containerized applications, where its Kubernetes support is a key differentiator. The integration of MinIO also makes it a strong candidate for use as a backup and archive target. While it can serve high-performance workloads, it is less suited for cutting-edge AI training and HPC environments where specific features like GPUDirect and NVMe-oF are critical requirements.

VAST Data: VAST AI Operating System

Solution Overview
VAST Data provides the VAST AI Operating System, a comprehensive software platform designed to unify and accelerate the entire AI data pipeline. The platform is built on VAST Data's foundational Disaggregated, Shared Everything (DASE) architecture, which separates compute logic from storage media, enabling independent scaling and parallel access for all compute nodes to all storage devices. The VAST AI Operating System is a single, integrated product composed of multiple software subsystems, including the VAST DataStore for all-flash persistence, VAST DataSpace for a global namespace, and the VAST DataBase for a transactional data lakehouse with native vector support. VAST Data pursues a broad platform strategy, aiming to consolidate the entire AI data stack. The solution will look and feel different over the contract lifecycle. VAST Data delivers an aggressive roadmap with a rapid release cadence, valuing frequent updates and advancement to address the fast-moving AI market.

VAST Data is positioned as a Leader and Fast Mover in the Innovation/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
VAST Data scored well on a number of decision criteria, including:

  • Kubernetes support: The platform provides an exceptional, enterprise-grade integration with Kubernetes through a full-featured, open source CSI driver. This implementation moves beyond basic persistent volume provisioning to expose the platform's rich data services, such as multitenancy and quality of service (QoS), directly to containerized applications.

  • Object storage integration: The solution’s implementation stands out because S3 is a native, first-class protocol rather than a performance-limiting gateway. It operates as a peer to NFS and SMB, providing transparent, high-performance access to a single, unified data pool across all protocols, which eliminates data silos and performance bottlenecks.

  • GPUDirect support: VAST Data's support for NVIDIA GPUDirect Storage (GDS) is solid, validated by deep partnership certifications and real-world benchmarks. The platform’s optimized, zero-copy data path over RDMA is a cornerstone of its value proposition for high-performance AI, allowing GPUs to directly access storage and enabling massive, linearly scaling performance.

Opportunities
VAST Data has room for improvement in a few decision criteria, including:

  • Public cloud integration: The platform offers a capable hybrid cloud solution with its VAST DataSpace global namespace, enabling a unified view across on-premises and cloud deployments. However, current architectural limitations in public clouds, specifically the lack of shared NVMe-oF infrastructure, prevent the full replication of its on-premises DASE architecture, performance, and scalability.

  • AI-driven cyber resilience orchestration: The platform provides a capable foundation for cyber resilience, with native machine learning-based threat detection and an event-driven engine. However, it currently relies on third-party partnerships for full, automated response orchestration, indicating that its native capabilities in this emerging area are still developing.

  • Data management: The platform provides solid data management capabilities by combining the VAST Catalog for deep metadata analytics with the VAST DataEngine for event-driven automation. This creates a powerful framework for active data governance, but the opportunity for improvement lies in maturing this from a developer's toolkit into a more turnkey solution with prepackaged workflows for data classification and lifecycle management.

Purchase Considerations
The VAST AI Operating System is licensed via an annual subscription based on usable capacity. Under its Gemini business model, this is bundled with hardware support. A key differentiator is that the software license is transferable, decoupling it from the hardware lifecycle and eliminating costly forklift upgrades, which provides a significant TCO advantage. However, compute-intensive workloads using the VAST DataBase or DataEngine may require additional per-core licenses. VAST Data is a definitive Platform Play, designed to consolidate file, object, and database workloads into a single system, making it best suited for large enterprises, cloud service providers, and AI-focused organizations. The platform is designed for operational simplicity, abstracting away complex storage management tasks like RAID configuration, and as a result, there are no required professional services for installation.

Use Cases
As a Platform Play vendor, the VAST AI Operating System supports a broad array of data-intensive use cases across all major industry verticals. The solution excels in use cases that demand extreme performance and massive scalability, including AI/ML training and inference, high-performance computing (HPC), media and entertainment production pipelines, enterprise backup and recovery, and large-scale data analytics. With the native integration of a vector database in its VAST DataBase component, the platform is also exceptionally well suited to power emerging generative AI and RAG applications from a single, unified data source.

VDURA: VDURA Data Platform

Solution Overview
VDURA is a data storage and management vendor that has strategically repositioned itself from its legacy as a niche hardware appliance provider for high-performance computing (HPC) to a modern, software-centric company. This transformation is underscored by a significant increase of over 100% in engineering investment and a shift to a subscription-based software model. The core offering is the VDURA Data Platform, an innovative storage solution built on the mature PanFS parallel file system. It unifies a high-performance file system and a cost-efficient object store under a single namespace and control plane. The platform's capabilities are extended through an optional, separately licensed suite of products: PanMove for advanced data movement and PanAnalytics for data analysis.

VDURA's strategy is to deliver a comprehensive platform for AI and HPC workloads. This is supported by an aggressive innovation cycle, with the company moving from infrequent major updates to a quarterly release schedule. As a vendor focused on innovation, the solution will look and feel different over the contract lifecycle. VDURA delivers an aggressive roadmap and is flexible and responsive to the market, valuing rapid advancement and frequent updates. This is evidenced by its ongoing work to reengineer the platform from a bare metal implementation to a modern microservices architecture, a foundational step that will enable future capabilities like composability.

VDURA is positioned as a Challenger and Fast Mover in the Innovation/Platform Play quadrant of the scale-out storage Radar chart.

Strengths
VDURA scored well on a number of decision criteria, including:

  • Public cloud integration: The platform shows a clear commitment to supporting hybrid cloud environments. While current integration for data migration and synchronization with major cloud providers like AWS, Azure, and GCP is facilitated by the separate PanMove Advanced tool, VDURA has a native Cloud Edition on its roadmap for the first half of 2025. This planned release signals a significant investment in delivering a more seamless, integrated hybrid cloud fabric.

  • Kubernetes integration: VDURA provides a native CSI driver that supports foundational persistent volume provisioning. This is a crucial capability that meets the basic requirements for running modern, containerized workloads in cloud-native environments. The company has a clear path to improving this functionality, with advanced data services like snapshot provisioning and management planned for the second half of 2025.

  • Object storage integration: The platform is built on a strong internal architecture that includes an object layer and provides a native S3 endpoint. This allows modern, cloud-native applications to access data directly using the S3 protocol, which is a significant advantage. Although integration with external object repositories for tiering is currently handled by a separate tool, the native S3 access is a robust feature.

Opportunities
VDURA has room for improvement in a few decision criteria, including:

  • AI/ML-based analytics and management: The platform's current AI/ML-based management capabilities are limited to predictive maintenance for hardware components like storage devices and power supply units. The solution could be improved by executing on its long-term roadmap to deliver the AIOps features (such as predictive analytics for performance or capacity, automated root cause analysis, and autonomous optimization) that are increasingly expected in enterprise platforms.

  • GPUDirect support: The platform does not currently support NVIDIA GPUDirect Storage, a critical feature for accelerating performance in the most demanding AI and HPC workloads. For a vendor competing directly with solutions that excel in this area, this is a notable gap. The vendor could significantly improve its competitive position in the high-end AI training market by delivering on this capability, which is currently on its roadmap for the second half of 2025.

  • Composable infrastructure: While VDURA is taking the necessary foundational steps by reengineering its platform to a microservices-based architecture, it does not yet offer a composable solution. True dynamic composition of disaggregated resources is a distant, long-term roadmap vision for the second half of 2026. Delivering on this vision would represent a major architectural advancement and a significant improvement for the platform.

Purchase Considerations
VDURA has transitioned to a transparent software-centric subscription model, with licensing based on a per-terabyte capacity metric. However, customers should note that advanced data services, such as data movement and analytics, are delivered via the optional PanMove and PanAnalytics suites, which are licensed separately and could lead to a more complex TCO calculation.

The solution is an emerging Platform Play, best suited for organizations with demanding HPC and AI workloads that are buying into the vendor's forward-looking vision and are willing to grow with the platform. It is designed to be a strategic part of the data infrastructure rather than a simple drop-in feature enhancement.

Deployment is now more flexible due to the software-defined model running on certified hardware platforms. VDURA provides free remote training, but most customers opt for the paid professional services for on-premises installation, which should be factored into procurement planning. Migration from traditional, nonparallel file systems will require careful planning and expertise.

Use Cases
As a Platform Play vendor, VDURA supports a range of industry verticals with high-performance storage requirements. These include manufacturing, academic research, life sciences, energy, federal government, and financial services. The platform’s core parallel file system architecture is the key enabler for its primary use cases.

The VDURA Data Platform is purpose-built for data-intensive applications such as AI model training, large-scale scientific simulations, genomics research, and complex financial modeling. Its ability to deliver high throughput and handle metadata-intensive operations makes it a strong fit for these demanding environments.

WEKA: NeuralMesh

Solution Overview
WEKA provides NeuralMesh, a storage solution focused on delivering high performance for AI and other next-generation workloads. NeuralMesh is a containerized, microservices-based architecture designed for hybrid and multicloud environments. This single, unified platform is built to deliver the performance of all-flash arrays with the simplicity of NAS and the scalability of the cloud. The solution is available as subscription software, as a preconfigured appliance (WEKApod), or for deployment in all public and private clouds.

WEKA’s strategy is centered on rapid advancement to meet the demanding requirements of modern data pipelines. As a vendor focused on innovation, its solution will likely look and feel different over the contract lifecycle. The company delivers an aggressive roadmap and values rapid advancement and frequent updates to address the evolving needs of AI and HPC workloads. This is evidenced by the recent architectural transformation to NeuralMesh and a forward-looking roadmap that includes an Augmented Memory Grid to accelerate AI inference and a converged infrastructure offering that runs directly on GPU clusters.

WEKA is positioned as a Leader and Outperformer in the Innovation/Feature Play quadrant of the scale-out storage Radar chart.

Strengths
WEKA scored well on a number of decision criteria, including:

  • Kubernetes support: The solution provides advanced, application-aware data services for containerized environments. Its CSI 2.0 plugin for Kubernetes supports dynamic provisioning and management of persistent volumes, including the ability to clone PVCs from snapshots, which simplifies the deployment and management of stateful applications at scale.

  • Object storage integration: NeuralMesh delivers deep integration with object storage, supporting bidirectional tiering to any S3-compatible target while maintaining a unified namespace for transparent data access. The platform can also function as a high-performance, S3-compatible object store itself, allowing data to be accessed concurrently via file and object protocols.

  • GPUDirect support: NeuralMesh is fully optimized for GPU-centric workloads, offering certified, deep integrations with key AI platforms like NVIDIA DGX SuperPOD. It fully supports GPUDirect Storage (GDS) over both InfiniBand and Ethernet (RoCE), enabling direct data transfer between storage and GPU memory to eliminate CPU bottlenecks and maximize the utilization of expensive accelerator resources.
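Applications consume GDS through NVIDIA's cuFile API. The minimal sketch below uses the RAPIDS kvikio Python bindings to read a file straight into GPU memory; the mount path is hypothetical, and this is generic cuFile usage under the assumption of a GDS-enabled mount, rather than WEKA-specific code.

```python
# Generic GPUDirect Storage read via the RAPIDS kvikio bindings to
# NVIDIA's cuFile API. When GDS is available, data moves DMA-direct
# from storage into GPU memory, bypassing CPU bounce buffers.
import cupy as cp
import kvikio

# Allocate the destination buffer directly in GPU memory.
batch = cp.empty(shape=(1 << 20,), dtype=cp.uint8)

with kvikio.CuFile("/mnt/fs/train/shard-0000.bin", "r") as f:  # hypothetical path
    nbytes = f.read(batch)  # blocking read straight into device memory

print(f"read {nbytes} bytes into GPU memory")
```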

WEKA was classified as an Outperformer given its rapid rate of development over the last year, including the launch of its transformative NeuralMesh architecture. The vendor maintains a monthly release cadence and has an aggressive roadmap focused on enabling emerging AI use cases, which could result in it leaping forward in the market in the next year.

Opportunities
WEKA has room for improvement in a few decision criteria, including:

  • Data management: While NeuralMesh provides foundational data management tools, it could be improved by developing more advanced native capabilities. The solution currently relies on basic file metadata for classification and directs customers to third-party tools for deeper analytics, data cataloging, and advanced metadata tagging. Building these features natively would provide a more comprehensive and integrated data lifecycle management experience.

  • AI-driven cyber resilience orchestration: The platform’s current ransomware protection is centered on immutable snapshots sent to an object store. However, its auditing and threat detection capabilities are limited. WEKA could improve in this area by incorporating AI/ML for predictive threat analytics and developing automated response orchestration, such as auto-isolation of infected systems and deeper, bidirectional integration with enterprise SIEM/SOAR platforms.

  • Public cloud integration: Although WEKA is available in all major cloud marketplaces and offers automated deployment, it could enhance its hybrid and multicloud management. Developing a truly unified control plane to manage policies, data services, and data mobility seamlessly across a distributed on-premises and multicloud estate would simplify operations and reduce administrative overhead for large enterprises.

Purchase Considerations
WEKA utilizes a flexible subscription-based licensing model based on the usable capacity consumed in the flash tier. While this aligns well with modern consumption patterns, the licensing portfolio is broken into multiple add-on options for features like tiering, data protection, and data reduction, which can add complexity for buyers trying to determine the total cost of ownership. As a Feature Play, the solution is designed for greenfield deployments or for the displacement of incumbent systems in performance-critical environments, and it scores well across most performance-oriented features. Deployment is made easier through automated installers and cloud templates. For large-scale, specialized deployments like NVIDIA DGX SuperPOD, professional services through certified partners are required to ensure optimal integration and performance.

Use Cases
As a Feature Play vendor, WEKA’s NeuralMesh is engineered to support a specific set of the most demanding data-intensive workloads rather than a broad range of general-purpose use cases. Its primary focus is on high-performance environments that require consistent, low-latency performance at scale, including AI model training and inference, high-performance computing (HPC), and large-scale analytics.

The platform excels in industry-specific applications where performance is the critical bottleneck, such as quantitative analysis in financial services, genomic sequencing in life sciences, and high-resolution rendering in media and entertainment.

6. Analyst’s Outlook

The scale-out storage market is having an identity crisis, and that's actually a good thing. After years of vendors pitching the same "store more data, faster" story, we're finally seeing real differentiation emerge. The catalyst? AI workloads and ransomware attacks have exposed just how inadequate traditional scale-out architectures have become for modern data challenges.

If you're evaluating scale-out storage today, the traditional metrics of cost per terabyte and IOPS are necessary but no longer sufficient. The market has split into two camps: vendors racing to support AI pipelines, particularly RAG architectures, and those doubling down on cyber resilience with features like isolated recovery vaults. The smart money is on platforms that can do both without compromising either.

Three themes dominate purchase decisions right now. First, AI readiness has moved from a roadmap item to table stakes. But it's not just about raw performance for training; the real bottleneck for enterprise AI is often the retrieval layer for RAG implementations, which requires low-latency access to distributed knowledge bases across file, object, and database systems. Second, ransomware protection has evolved beyond immutable snapshots. Regulators and cyber insurance providers now expect demonstrable recovery capabilities. Think orchestrated recovery that can restore operations in hours, not weeks. Third, the promise of a unified data fabric is finally becoming achievable, driven by necessity rather than vendor hype. Organizations can't afford to manually stitch together data from a dozen different silos to feed their AI initiatives.
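To see why the retrieval layer stresses storage, consider a minimal sketch of the RAG hot path: every user query triggers a vector search followed by a fan-out of small object reads. The embedding index, bucket, keys, and endpoint below are all hypothetical; the point is the access pattern, not any particular product.

```python
# Minimal RAG retrieval sketch: vector search, then latency-sensitive
# object fetches. All data sources and names are hypothetical.
import numpy as np
import boto3

index = np.load("embeddings.npy")              # (n_chunks, dim), precomputed
keys = open("chunk_keys.txt").read().splitlines()
s3 = boto3.client("s3", endpoint_url="https://s3.example.internal")

def retrieve(query_vec: np.ndarray, k: int = 4) -> list[bytes]:
    # Cosine similarity of the query against every chunk embedding.
    sims = index @ query_vec / (
        np.linalg.norm(index, axis=1) * np.linalg.norm(query_vec)
    )
    top = np.argsort(sims)[-k:][::-1]
    # Each hit becomes a small random read against the storage layer;
    # this fan-out, repeated per query, is the latency-critical step.
    return [
        s3.get_object(Bucket="kb", Key=keys[i])["Body"].read()
        for i in top
    ]
```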

For IT decision-makers, your next move depends on your organization's AI maturity. If you're just starting with AI pilots, prioritize platforms with strong object storage integration and flexible protocol support, as you'll need both as you scale. If you're already running production AI workloads, focus on solutions with proven GPUDirect support and sophisticated data pipeline capabilities. Don't overlook the basics, though. Any platform you choose needs robust public cloud integration and automated tiering to manage costs as data volumes explode.

Looking ahead, expect further consolidation as the cost of keeping pace with both AI and security requirements prices out smaller players. The winners will be those who can abstract away the complexity of managing data across wildly different consumption patterns, from massive sequential reads for model training to millions of small random reads for inference. Watch for increased emphasis on energy efficiency as well; the combined power draw of storage and GPU clusters is becoming a board-level concern.

The key takeaway? The era of general-purpose scale-out storage is ending. Your next storage platform needs to be opinionated about workloads while remaining flexible about deployment. Start your evaluation by mapping your AI roadmap and security requirements first, then work backward to the storage architecture that can support both. The vendors that can credibly enable AI innovation while guaranteeing cyber resilience will define the market for the next five years.

7. Methodology

*Vendors marked with an asterisk did not participate in our research process for the Radar report, and their capsules and scoring were compiled via desk research.

For more information about our research process for Radar reports, please visit our Methodology.

8. About Whit Walters

My mission is to deliver innovative and scalable solutions that enable data-driven decision making and business transformation. I have extensive knowledge and skills in big data, data warehousing, Apache Airflow, and Google Cloud Platform, where I hold three professional certifications. I enjoy collaborating with clients and partners, sharing best practices, and mentoring the next generation of data and cloud professionals.

9. About GigaOm

GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.

GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.

GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.