Proven. Powerful. MLPerf-Validated: DDN AI400X3 Sets the Bar for AI Infrastructure
DDN®, the global leader in AI and data intelligence solutions, has set a new benchmark for performance and efficiency with its next-generation AI400X3 storage appliance—delivering standout results in the latest MLPerf® Storage v2.0 benchmarks. Powered by DDN’s EXAScaler® parallel file system, the AI400X3 is engineered to accelerate today’s most demanding AI workloads at scale, combining exceptional performance density with a compact, energy-efficient design that redefines what’s possible in AI infrastructure.
For large enterprises, this means faster time-to-insight, lower operational costs, and the ability to scale AI initiatives with confidence—without compromise on performance or sustainability.
“AI at scale demands more than brute force—it requires precision-engineered infrastructure that can deliver relentless performance, efficiency, and reliability,” said Sven Oehme, CTO at DDN. “With the AI400X3, we’ve achieved exactly that. These MLPerf results prove that DDN can keep pace with—and even outpace—the world’s most advanced GPUs, all within a compact, power-efficient footprint. We're not just enabling AI—we're removing the bottlenecks that have held it back.”
The MLPerf Storage benchmark tests how well a storage system can keep up with demanding AI workloads—and DDN delivered across the board. The AI400X3 was evaluated in both single-node and multi-node categories, representing real-world scenarios from early-stage deployments to full-scale, distributed AI training. Impressively, in both cases, DDN achieved these results using a single, compact 2U appliance—demonstrating how powerful and efficient modern AI infrastructure can be.
DDN’s AI400X3 didn’t just perform well; it set a new reference point for AI infrastructure. In both single-node and multi-node categories, this compact 2U system kept hundreds of simulated H100 GPUs saturated across diverse AI workloads. For teams just getting started, the single-node results show how much performance can be unlocked with minimal infrastructure. For organizations training at scale, the multi-node results demonstrate what AI practitioners care about most: consistent, sustained throughput under heavy GPU load. MLPerf Storage v2.0 is the industry’s standard benchmark for measuring how well storage can keep accelerators fed during training, and the AI400X3 set a new bar.
Highlights from the MLPerf Storage v2.0 submission:
In single-node benchmarking, the DDN AI400X3 achieved:
- The highest performance density on CosmoFlow and ResNet-50 training, serving 52 and 208 simulated H100 GPUs respectively from a single 2U, 2,400-watt appliance
- I/O performance of 30.6 GB/s reads and 15.3 GB/s writes, resulting in Llama3-8B checkpoint load and save times of only 3.4 and 7.7 seconds, respectively
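For context, a rough back-of-the-envelope check (illustrative only, not part of the official submission): multiplying the reported throughput by the reported times gives the approximate volume of data moved per checkpoint.

    30.6 GB/s x 3.4 s  ≈ 104 GB read per checkpoint load
    15.3 GB/s x 7.7 s  ≈ 118 GB written per checkpoint save

Those figures are on the order of a full Llama3-8B training checkpoint, including optimizer state.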
And in multi-node benchmarking, it achieved:
- More than 120 GB/s of sustained read throughput for 3D U-Net (Unet3D) H100 training
- Support for up to 640 simulated H100 GPUs on ResNet-50
- Up to 135 simulated H100 GPUs on CosmoFlow with the new AI400X3, a 2x improvement over last year’s results
These benchmark results highlight the DDN AI400X3’s ability to deliver consistently high performance across a wide spectrum of AI workloads—even under intensive, multi-node training demands. By ensuring GPUs remain fully utilized with fast, reliable data access, the AI400X3 accelerates model training while enabling frequent checkpointing without compromising performance. The result is improved training efficiency, greater resilience, and reduced overall infrastructure costs.
With a compact 2U form factor and low power consumption, the AI400X3 is purpose-built to address growing datacenter constraints around space, power, and cooling—making it an ideal choice for organizations scaling AI workloads sustainably.
DDN has long been recognized as a leader in high-performance AI and HPC infrastructure. Since 2016, NVIDIA has relied exclusively on DDN to power its internal AI clusters, underscoring the company’s role as a trusted partner in driving scalable, real-world AI innovation.
DDN isn’t just helping organizations scale; it’s powering the AI revolution from the data layer up. Whether it’s decoding the human genome, accelerating breakthroughs in medical imaging, or training tomorrow’s most complex vision models, DDN’s AI400X3 is engineered to eliminate bottlenecks and keep GPUs fed at full speed. This isn’t storage as usual; it’s purpose-built, performance-obsessed infrastructure that turns data into real-time intelligence.
And by going head-to-head in MLPerf Storage, the industry’s toughest benchmarking gauntlet, DDN is putting facts behind the hype. These results don’t just validate DDN’s leadership; they give enterprises and AI innovators the trusted data they need to build faster, train smarter, and deploy with confidence.
To get started with DDN today, please visit ddn.com.
About DDN
DDN is the world’s leading AI and data intelligence company, empowering organizations to maximize the value of their data with end-to-end HPC and AI-focused solutions. Its customers range from the largest global enterprises and AI hyperscalers to cutting-edge research centers, all leveraging DDN’s proven data intelligence platform for scalable, secure, and high-performance AI deployments that drive 10x returns.
Follow DDN: LinkedIn, X, and YouTube.
Contacts
Media Contact:
Amanda Lee, VP, Marketing – Analyst & Public Relations at DDN
Amlee@ddn.com