
KAYTUS Launches All-QLC Flash Storage at AI EXPO 2026 for 10,000-GPU Clusters

Source: Internet | Published: 2026-05-08 16:19:56

KAYTUS’s next-generation all-QLC flash solution delivers fully linear performance scaling for massive GPU clusters, while reducing TCO by 70%, enabling ultra-large-scale computing for the era of agentic AI.

SINGAPORE--(BUSINESS WIRE)--At AI EXPO KOREA 2026, KAYTUS officially launched its All-QLC Flash Storage Solution, engineered to deliver high performance, massive scalability, and cost efficiency for 10,000-GPU clusters. The solution addresses data-delivery bottlenecks in ultra-large-scale AI training, helping maximize GPU resource utilization.

Based on the KR2280 and KR1180 server platforms, the solution is deeply integrated with industry-leading AI-native parallel file systems to eliminate data silos inherent in traditional tiered storage. Purpose-built for read-intensive AI workloads, it overcomes the horizontal scaling limitations of massive clusters. Verified test data show that, at exabyte-scale deployment, the solution delivers 10 TB/s aggregate bandwidth and 100 million IOPS. In addition, it reduces five-year TCO by 70% compared with traditional TLC-based solutions, accelerating model innovation for AI cloud providers and intelligent computing centers.
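Taken together, the headline figures imply a simple per-GPU budget. The back-of-envelope sketch below only divides the published aggregate numbers across the 10,000-GPU cluster size cited in the release; KAYTUS does not publish a node count or per-node split, so the arithmetic stops at per-GPU figures.

```python
# Back-of-envelope check: divide the published aggregate figures across
# the 10,000-GPU cluster cited in the release. Illustrative arithmetic only.

AGGREGATE_BANDWIDTH_TB_S = 10        # published aggregate read bandwidth
AGGREGATE_IOPS = 100_000_000         # published aggregate random-read IOPS
GPU_COUNT = 10_000                   # cluster size cited in the release

per_gpu_bandwidth_gb_s = AGGREGATE_BANDWIDTH_TB_S * 1000 / GPU_COUNT
per_gpu_iops = AGGREGATE_IOPS / GPU_COUNT

print(f"Read bandwidth per GPU: {per_gpu_bandwidth_gb_s:.1f} GB/s")   # ~1.0 GB/s
print(f"Random-read IOPS per GPU: {per_gpu_iops:,.0f}")               # ~10,000
```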

Limitations in Traditional AI Storage Architectures

The explosive growth of AI is fundamentally transforming enterprise computing and storage requirements. Large-scale AI model training features highly read-intensive workloads that require tens of thousands of GPUs to concurrently access exabyte-scale datasets with sub-millisecond latency. Traditional storage architectures now face three major challenges:

  • Separated Data Silos: Traditional ETL processes require data to be moved from object storage to parallel file systems before training, resulting in time-consuming physical data migration. IDC research indicates that data teams spend 81% of their time on data preparation, slowing business iteration.
  • Workload and Media Mismatch: More than 90% of AI training involves high-frequency concurrent reads. In contrast, traditional TLC flash solutions provide excessive write endurance that is unnecessary for these read-intensive workloads, driving up procurement, space, and power costs for exabyte-scale clusters and resulting in inefficient resource utilization (a rough endurance estimate follows this list).
  • Scalability Bottlenecks: Traditional file systems were not designed to handle the I/O burst workloads generated by 10,000-GPU clusters. As clusters scale, metadata lock contention and communication overhead introduce latency spikes and degraded overall performance.
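To make the media-mismatch point concrete, the following sketch estimates the write endurance (drive writes per day, DWPD) that a read-dominated training workload actually demands. Every input is a hypothetical placeholder chosen for illustration rather than a measured or KAYTUS-supplied value; the point is only that a >90%-read mix leaves the required endurance far below what TLC-class drives are provisioned for.

```python
# Rough estimate of the write endurance (DWPD) a read-dominated AI training
# workload actually requires. All inputs are illustrative assumptions,
# not measured values.

drive_capacity_tb = 61.44        # hypothetical high-capacity QLC drive
node_bandwidth_gb_s = 50.0       # hypothetical sustained per-node throughput
read_fraction = 0.90             # ">90% of AI training involves reads"
drives_per_node = 32             # hypothetical drive count per storage node

write_gb_per_day_per_drive = (
    node_bandwidth_gb_s * (1 - read_fraction) * 86_400 / drives_per_node
)
required_dwpd = write_gb_per_day_per_drive / (drive_capacity_tb * 1000)

print(f"Writes per drive per day: {write_gb_per_day_per_drive / 1000:.1f} TB")
print(f"Required endurance: {required_dwpd:.2f} DWPD")
# With these placeholder inputs the required endurance lands well below
# 1 DWPD, inside typical QLC ratings, whereas enterprise TLC drives are
# commonly rated for 1-3 DWPD.
```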

KAYTUS Solution: All-QLC Flash Storage Delivering High Performance, Scalability, and Cost Efficiency

The next-generation KAYTUS All-QLC Flash Storage Server Solution is purpose-built to unlock the full potential of read-intensive AI training workloads. By tightly integrating flagship compute nodes with industry-leading AI-native parallel file systems, the solution harnesses advanced hardware–software co-design to deliver breakthrough performance, seamless scalability, and superior cost efficiency for ultra-large-scale AI computing environments.

Architectural Innovation: Overcoming AI Training Efficiency Bottlenecks

The KAYTUS solution establishes a unified namespace with native multi-protocol access across file, object, and block storage. By leveraging high-capacity QLC flash pools and NVMe-oF fully shared interconnects, it redefines the unified data plane for AI storage, effectively eliminating the data silos inherent in traditional tiered architectures. Data can now flow on demand to GPU nodes without cross-system migration, enabling sub-millisecond access and significantly improving AI training data retrieval efficiency.
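One practical consequence of a unified namespace with multi-protocol access is that the same dataset can be reached through a POSIX file path and an S3-style object call without an intervening copy. The snippet below is a generic illustration of that pattern; the mount point, endpoint URL, bucket, and object key are hypothetical and do not refer to a documented KAYTUS API.

```python
# Illustration of multi-protocol access to one unified namespace:
# the same object is readable as a POSIX file and via an S3-compatible
# endpoint. Paths, endpoint, bucket, and key are hypothetical.

import boto3

DATASET_KEY = "training/shard-00001.tar"

# File protocol: the namespace is mounted (e.g. over NFS) at a POSIX path.
with open(f"/mnt/unified-ns/{DATASET_KEY}", "rb") as f:
    shard_via_file = f.read()

# Object protocol: the same data, addressed by bucket/key through S3 APIs.
s3 = boto3.client("s3", endpoint_url="https://storage.example.internal")
resp = s3.get_object(Bucket="unified-ns", Key=DATASET_KEY)
shard_via_object = resp["Body"].read()

# Because both protocols resolve to one namespace, no ETL copy is needed
# before training jobs can read the data.
assert shard_via_file == shard_via_object
```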

  • Hardware Optimization: Engineered for read-intensive workloads, the solution features a PCIe 5.0 direct-connect architecture that doubles single-node I/O bandwidth compared to the previous generation. Combined with NUMA-balanced optimization, it effectively eliminates internal throughput bottlenecks.
  • Software Synergy: The solution integrates NFS over RDMA and native GPU Direct Storage technology, enabling direct data paths from QLC flash to GPU memory. By leveraging a disaggregated architecture that decouples protocol processing from storage state, it eliminates east-west traffic and achieves fully linear scaling of bandwidth and throughput from petabyte to exabyte scale (an application-level sketch of the GPU-direct data path follows this list).
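For a sense of what the GPU Direct Storage data path looks like from application code, the sketch below uses the open-source KvikIO bindings for NVIDIA cuFile to read a file from a mounted share directly into GPU memory. It assumes GDS-capable drivers and a compatible file system; the mount path is hypothetical and the code is not KAYTUS-specific.

```python
# Sketch of a GPUDirect Storage read path: file data moves from the
# storage mount directly into GPU memory, bypassing a host bounce buffer.
# Requires GDS-capable drivers and file system; the path is hypothetical.

import cupy as cp
import kvikio  # Python bindings for NVIDIA cuFile / GPUDirect Storage

SHARD_PATH = "/mnt/unified-ns/training/shard-00001.tar"  # hypothetical mount

# Destination buffer allocated in GPU memory.
gpu_buffer = cp.empty(256 * 1024 * 1024, dtype=cp.uint8)  # 256 MiB

# The read lands directly in the GPU buffer when GDS is available;
# KvikIO falls back to a host-memory copy otherwise.
f = kvikio.CuFile(SHARD_PATH, "r")
nbytes = f.read(gpu_buffer)
f.close()

print(f"Read {nbytes} bytes straight into GPU memory")
```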

10,000-GPU Cluster Benchmarks: Exceptional Performance, Scalability, and Cost Efficiency

In benchmark tests conducted in an exabyte-scale storage environment for a 10,000-GPU data center, the solution, powered by KR2280 and KR1180 nodes and optimized with industry-leading AI-native parallel file systems, demonstrated its capability to scale seamlessly to support computing clusters of up to 10,000 GPUs.

  • Extreme Performance at Scale: The system delivers 10 TB/s sustained aggregate read bandwidth and 100 million random-read IOPS, enabling concurrent access for tens of thousands of GPUs. Performance scales linearly as additional nodes are added, while GPU utilization remains consistently above 95%, with no storage-side lock contention or queuing, effectively eliminating GPU data starvation.
  • Superior Cost Efficiency: Compared with traditional TLC all-flash solutions, the solution reduces five-year TCO by 70% and cuts power and cooling costs by more than 75%, helping enterprises avoid paying for write endurance their read-intensive workloads do not need (a skeleton of this kind of cost comparison follows this list).
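A five-year TCO comparison of this kind typically combines drive capital cost with power and cooling energy over the service life. The sketch below lays out that structure with purely hypothetical unit prices and power figures; it shows the shape of the calculation and is not intended to reproduce the 70% figure quoted above.

```python
# Skeleton of a five-year TCO comparison between two all-flash designs.
# Every numeric input is a hypothetical placeholder, not vendor data.

def five_year_tco(capacity_pb, price_per_tb, watts_per_tb, kwh_price=0.12,
                  cooling_overhead=0.4, years=5):
    """Drive capex plus energy for power and cooling over the service life."""
    capacity_tb = capacity_pb * 1000
    capex = capacity_tb * price_per_tb
    energy_kwh = (capacity_tb * watts_per_tb * (1 + cooling_overhead)
                  * 24 * 365 * years / 1000)
    return capex + energy_kwh * kwh_price

# Hypothetical inputs for a 100 PB deployment.
tlc = five_year_tco(100, price_per_tb=90, watts_per_tb=1.2)
qlc = five_year_tco(100, price_per_tb=60, watts_per_tb=0.5)

print(f"TLC design : ${tlc:,.0f}")
print(f"QLC design : ${qlc:,.0f}")
print(f"Reduction  : {(1 - qlc / tlc):.0%}")
```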