Product Description
Workloads
AI & ML/DL Training
Modeling & Simulation
HPC & Supercomputing
Targeted workloads and verticals, including healthcare, life sciences, and academia
Unlock insights with purpose-built performance in a highly dense, smart-cooled AI server that removes traditional computational barriers to real-time insight
Dense Acceleration
Four optimized, fully interconnected GPUs drive demanding ML/DL training and simulation modeling in a 2U form factor
Efficiently Cooled
Direct liquid-cooled CPUs and GPUs maximize performance and power-utilization efficiency while lowering TCO
Flexible I/O
Maximize throughput with four PCIe Gen5 slots, providing 1:1 mapping with GPUs for continuous utilization
Dual Socket
- Up to two 4th Generation Intel Xeon Scalable processors with up to 56 cores per processor
- Direct liquid-cooled CPUs & GPUs
- 1200 mm rack capable
Support for High Memory Speed and Capacity
- Up to 32 DDR5 DIMMs
- 4800 MT/s (1 DPC) or 4400 MT/s (2 DPC)
Support for Up to 8 Drives
- NVMe U.2 Gen4 or E3.S Gen5 drives
- Hot-plug BOSS-N1 for boot (optional)
GPU Flexibility & Optimization
- 4x NVIDIA 700W SXM GPUs
-or-
- 4x Intel Data Center GPU Max Series 1550 600W OAM GPUs
- Quad-connected NVLink (NVIDIA) or Xe Link (Intel) capability across all four GPUs