Tesla GPU-Based Servers

GPU supercomputing servers offer massive compute power and HPC performance, significantly accelerating your applications.

NVIDIA Tesla is the world's leading platform provider for accelerated data centres. The key to this platform is the massively parallel GPU accelerator, which delivers dramatically improved data throughput for compute-intensive tasks without rising costs and with a small physical footprint.


NVIDIA Tesla Elite Partner 25% Discount.

CyberServe Xeon SP1-104S G4 GPU

Supports 1x double slot GPU card, 4th Gen Intel Xeon Scalable processor, dual 1Gb/s LAN ports, redundant power supply, 4 x 3.5" NVMe/SATA hot-swappable bays

Form Factor:
1U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
4
Drive Interface:
SATA , 12Gb/s SAS, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
8x 4800MHz
GPU Slots:
1x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
512GB
Configure from €3,313
CyberServe Xeon SP1-110S G3

Ideal for virtualisation, cloud computing, enterprise server. 2x PCI-E 4.0 x16 slots. Intel® Ethernet Controller X550 2x 10GbE RJ45. Redundant power supplies.

Form Factor:
1U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
10
Drive Interface:
SATA , NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
8x 3200MHz
GPU Support:
Tesla GPU Optimised
Features:
Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
1TB
Configure from €3,370
CyberServe Xeon SP1-102N G3

Edge Server – 1U 3rd Gen. Intel Xeon Scalable GPU server system, ideal for AI & Edge applications.

Form Factor:
1U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
2
Drive Interface:
SATA , NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
16x 3200MHz
GPU Slots:
1x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
Full Height/Length Expansion, Redundant Power Supply - Standard, Short Depth, Front I/O Ports
Max RAM Capacity:
2TB
Configure from €3,497
CyberServe Xeon SP1-110S NVMe G4 GPU

Supports 1x double slot GPU card, 4th Gen Intel Xeon Scalable processor, dual 1Gb/s LAN ports, redundant power supply, 10 x 2.5" NVMe/SATA hot-swappable bays

Form Factor:
1U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
10
Drive Interface:
SATA , 12Gb/s SAS, NVMe, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
8x 4800MHz
GPU Slots:
1x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
512GB
Configure from €3,521
CyberServe Xeon SP1-208S NVMe G4 GPU

Supports 2x double slot GPU cards, 4th Gen Intel Xeon Scalable processor, dual 1Gb/s LAN ports, redundant power supply, 8 x 3.5" NVMe/SATA hot-swappable bays

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
8
Drive Interface:
SATA , NVMe, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
8x 4800MHz
GPU Slots:
2x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
512GB
Configure from €3,608
CyberServe Xeon SP1-112 G4

4th Gen Intel Xeon Scalable processor, single 1Gb/s LAN port, redundant power supply, 12 x 2.5" SATA/SAS hot-swappable bays

Form Factor:
1U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
12
Drive Interface:
SATA , 12Gb/s SAS, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
16x 4800MHz
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Redundant Power Supply - Standard, Full Height/Half Length Expansion
Max RAM Capacity:
1TB
Configure from €3,647
CyberServe Xeon SP1-112 NVMe G4

4th Gen Intel Xeon Scalable processor, single 1Gb/s LAN port, redundant power supply, 12 x 2.5" NVMe/SATA hot-swappable bays

Form Factor:
1U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
12
Drive Interface:
SATA , 12Gb/s SAS, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
16x 4800MHz
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Redundant Power Supply - Standard, Full Height/Half Length Expansion
Max RAM Capacity:
1TB
Configure from €3,898
CyberServe Xeon SP1-212 G4

4th Gen Intel Xeon Scalable processor, single 1Gb/s LAN port, redundant power supply, 12 x 3.5"/2.5" SATA/SAS hot-swappable bays

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA , 12Gb/s SAS, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
16x 4800MHz
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Redundant Power Supply - Standard, Full Height/Half Length Expansion
Max RAM Capacity:
1TB
Configure from €3,974
CyberServe Xeon SP1-202 G4 GPU

4th Gen Intel Xeon Scalable processor, single 1Gb/s LAN port, redundant power supply, 2 x 2.5" NVMe/SATA hot-swappable bays

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
2
Drive Interface:
SATA , 12Gb/s SAS, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
16x 4800MHz
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard, Short Depth
Max RAM Capacity:
1TB
Configure from €4,020
CyberServe Xeon SP2 112T-12NVME G4

Dual 4th Gen Intel Xeon Scalable processor, dual 10Gb/s LAN port, redundant power supply, 12 x 2.5" NVMe/SATA/SAS hot-swappable bays

Form Factor:
1U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
12
Drive Interface:
SATA , 12Gb/s SAS, NVMe, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
32x 4800MHz
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Redundant Power Supply - Standard, Full Height/Half Length Expansion
Max RAM Capacity:
GB
Configure from €4,372
CyberServe Xeon SP1-212 G4 GPU

Supports up to 3 x double slot Gen5 GPU cards, single 1Gb/s LAN port, redundant power supply, 12 x 3.5/2.5" SATA/SAS hot-swappable bays, 4th Gen Intel Xeon Scalable processor

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA , 12Gb/s SAS, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
16x 4800MHz
GPU Slots:
3x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
1TB
Configure from €4,546
3.5" Drives 
CyberServe EPYC EP1-G242-Z11

Up to 4 x NVIDIA ® PCIe Gen4 GPU cards. NVIDIA-Certified system for scalability, functionality, security, and performance. Dedicated management port. Redundant power.

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
4
Drive Interface:
SATA
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
8x 3200MHz
GPU Slots:
4x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, Extra Expansion Slots, Full Height/Length Expansion
Max RAM Capacity:
1TB
Configure from €4,838
CyberServe Xeon SP2-G291-281 GPU Server

High Performance Computing Server - Dual Intel Xeon Scalable Processor Series, 2U Server, 8x GPU Cards

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Drive Interface:
SATA , 12Gb/s SAS
Server Processor:
Xeon Scalable Processor Gen 2
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, Redundant Power Supply - Optional
Max RAM Capacity:
GB
Configure from €4,912
CyberServe Xeon 7049GP-TRT GPU Server

GPU Computing Pedestal Supercomputer, 4x Tesla or GTX-Titan GPU Cards

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Drive Interface:
SATA , 12Gb/s SAS
Server Processor:
Xeon Scalable Processor Gen 2
GPU Support:
Tesla GPU Optimised
Features:
Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Optional
Max RAM Capacity:
GB
Configure from €4,936
CyberServe Xeon SP2-ESC4000 G4 GPU Server

GPU Computing 2U Supercomputer, 4x Tesla, AMD or GTX-Titan GPU Cards

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Drive Interface:
SATA , 12Gb/s SAS
Server Processor:
Xeon Scalable Processor Gen 2
GPU Support:
Tesla GPU Optimised
Features:
High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Optional
Max RAM Capacity:
GB
Configure from €4,994
2.5" Drives 
CyberServe Xeon SP2-1029GQ-TNRT GPU Server

Ultra High-Density GPU Computing 1U Supercomputer, 4x Tesla or GTX-Titan GPU Cards - 20,000 CUDA Cores

Form Factor:
1U
Drive Bays:
Fixed Drives
HDD Size:
2.5" Drives
Drive Interface:
SATA
Server Processor:
Xeon Scalable Processor Gen 2
GPU Support:
Tesla GPU Optimised
Features:
High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Optional
Max RAM Capacity:
GB
Configure from €5,149
Short Depth 2.5" Drives NVMe Drives 
CyberServe EPYC EP1 202-NVMe-G G4

Short Depth Single AMD EPYC 9004 Series Edge Server with 2x GPU Slots, 2x 2.5" Gen4 NVMe/ SATA Hot-Swappable bays

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
2
Drive Interface:
SATA , NVMe, M.2
Server Processor:
AMD EPYC 9004 Series
Memory DIMMS:
12x 4800MHz
GPU Slots:
2x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, Full Height/Length Expansion, Redundant Power Supply - Standard, Short Depth
Max RAM Capacity:
768GB
Configure from €5,266
3.5" Drives NVMe Drives 
CyberServe Xeon SP2-212NS G3

Supports 3x double slot GPU cards, dual 1Gb/s LAN ports, 5x PCIe Gen4 x16 slots, redundant power supply.

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA , 12Gb/s SAS, NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
32x 3200MHz
GPU Slots:
3x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
4.1TB
Configure from €5,403
CyberServe EPYC EP1 212-8NVMe G4

Single AMD EPYC 9004 Series- 12x 3.5" NVMe / SATA Drives

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA , 12Gb/s SAS, NVMe, M.2
Server Processor:
AMD EPYC 9004 Series
Memory DIMMS:
12x 4800MHz
GPU Slots:
2x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
768GB
Configure from €5,694
2.5" Drives NVMe Drives 
CyberServe EPYC EP1-G292-Z20 GPU Server

8x PCIe Gen4 expansion slots for GPUs, 2 x 10Gb/s SFP+ LAN ports (Mellanox® ConnectX-4 Lx controller), 2 x M.2 with PCIe Gen3 x4/x2 interface

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
8
Drive Interface:
SATA , 12Gb/s SAS, NVMe, M.2
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
8x 3200MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
1TB
Configure from €6,271
CyberServe Xeon SP2-408S NVMe G4 GPU

Dual 4th Gen Intel Xeon Scalable Gen4 Processor, GPU Computing Pedestal Supercomputer Server, 4x Tesla, RTX GPU Cards

Form Factor:
Pedestal
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
8
Drive Interface:
SATA , 12Gb/s SAS, NVMe, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
16x 4800MHz
GPU Slots:
4x Double Width GPU / 8x Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
1TB
Configure from €6,280
2.5" Drives NVMe Drives 
CyberServe Xeon SP2-208-2S-SFF-GPU G3

2U GPU server powered by dual-socket 3rd Gen Intel Xeon Scalable processors, supporting up to 16 DIMMs, four dual-slot GPUs, 2x M.2, four NVMe drives (by SKU) and a total of eleven PCIe 4.0 slots

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
8
Drive Interface:
SATA , 12Gb/s SAS, NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
16x 3200MHz
GPU Slots:
4x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
2TB
Configure from €6,408
Ultra High-Performance 2.5" Drives NVMe Drives 10Gb Lan 
CyberServe SP2-104-2S-GPU G3

GPU server optimised for HPC, Scientific Virtualisation and AI. Powered by 3rd Gen Intel Xeon Scalable processors. 6x PCIe Gen 4.0 x16, 1x M.2

Form Factor:
1U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
4
Drive Interface:
SATA , 12Gb/s SAS, NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
16x 3200MHz
GPU Slots:
4x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
2TB
Configure from €7,111
Cyberserve Xeon SP2-408-4S GPU G3

Ideal for scientific virtualisation and HPC. 6x PCI-E 4.0 x16 slots. 2x M.2 NVMe or SATA supported. Redundant power supplies.

Form Factor:
Pedestal
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
8
Drive Interface:
SATA , 12Gb/s SAS, NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
16x 3200MHz
GPU Slots:
4x Double Width GPU / 8x Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
2TB
Configure from €7,268
Ultra High-Performance 3.5" Drives 2.5" Drives NVMe Drives 
CyberServe EPYC EP2-G292-Z42 GPU Server

8x PCIe Gen3 expansion slots for GPUs, 2x 10Gb/s BASE-T LAN ports (Intel® X550-AT2 controller), 4x NVMe and 4x SATA/SAS 2.5" hot-swappable HDD/SSD bays

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
8
Drive Interface:
SATA , 12Gb/s SAS, NVMe
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
16x 3200MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
2TB
Configure from €7,869
3.5" Drives 
CyberServe Xeon SP2-208-2S-GPU G3

2U dual-socket GPU server powered by 3rd Gen Intel Xeon Scalable processors, supporting up to 16 DIMMs, four dual-slot GPUs, 4x M.2, eight NVMe drives (by SKU) and a total of eleven PCIe 4.0 slots.

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
8
Drive Interface:
SATA , 12Gb/s SAS, NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
16x 3200MHz
GPU Slots:
4x Double Width GPU / 8x Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
2TB
Configure from €8,142
Ultra High-Performance 2.5" Drives NVMe Drives 
CyberServe SP2-G292-280 G3

GPU Server - 2U 8 x GPU Server | Application: AI , AI Training , AI Inference , Visual Computing & HPC. Dual 10Gb/s BASE-T LAN ports.

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
8
Drive Interface:
SATA , 12Gb/s SAS, NVMe
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
24x 3200MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
3.1TB
Configure from €8,797
2.5" Drives NVMe Drives 
CyberServe EPYC EP2 224 NVMe-G G4

Dual AMD EPYC 9004 Series HPC Server - 24x 2.5" NVMe / SATA Drives

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
28
Drive Interface:
SATA , NVMe, M.2
Server Processor:
AMD EPYC 9004 Series
Memory DIMMS:
24x 3200MHz
GPU Support:
Tesla GPU Optimised
Features:
High RAM Capacity, Redundant Power Supply - Standard
Max RAM Capacity:
1.5TB
Configure from €9,058
2.5" Drives NVMe Drives 
CyberServe EPYC EP2-G482-Z51 GPU Server

Up to 8 x PCIe Gen4 GPGPU cards, 2 x 10Gb/s BASE-T LAN ports (Intel® X550-AT2), 8-Channel RDIMM/LRDIMM DDR4 per processor, 32 x DIMMs

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
10
Drive Interface:
SATA
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
32x 3200MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
4.1TB
Configure from €9,868
Ultra High-Performance 3.5" Drives 10Gb Lan 
CyberServe Xeon SP2-412G-GPU G3

Up to 8x PCIe Gen4 GPGPU cards, dual 10Gb/s LAN ports, redundant power option.

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
32x 3200MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard, Front I/O Ports
Max RAM Capacity:
4.1TB
Configure from €9,943
CyberServe EPYC EP2-4124GS-TNR GPU Server

8 PCI-E 4.0 x16 + 3 PCI-E 4.0 x8 slots, Up to 24 Hot-swap 2.5" drive bays, 2 GbE LAN ports (rear)

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
24
Drive Interface:
SATA , NVMe
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
32x 3200MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
4.1TB
Configure from €9,951
CyberServe Xeon SP2-ESC8000 G4 GPU Server

8x PCIE x16, Redundant 2400W Power, Dual Gigabit

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Drive Interface:
SATA , 12Gb/s SAS
Server Processor:
Xeon Scalable Processor Gen 2
GPU Support:
Tesla GPU Optimised
Features:
High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Optional
Max RAM Capacity:
GB
Configure from €10,288
CyberServe Xeon SP2-208-4G NVMe G4 GPU

Supports up to 8 x double slot Gen4 GPU cards, dual 10Gb/s BASE-T LAN ports, redundant power supply, 8 x 2.5" NVMe/SATA hot-swappable bays. Built for AI & HPC

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
8
Drive Interface:
SATA , NVMe
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
24x 4800MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
1TB
Configure from €11,496
Ultra High-Performance 3.5" Drives NVMe Drives 10Gb Lan 
CyberServe Xeon SP2 412-8G GPU G3

Up to 10x PCIe Gen4 GPGPU cards, dual 10Gb/s BASE-T LAN, redundant power supply.

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA , 12Gb/s SAS, NVMe
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
32x 3200MHz
GPU Slots:
10x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard, Front I/O Ports
Max RAM Capacity:
4.1TB
Configure from €11,544
Ultra High-Performance 3.5" Drives NVMe Drives 
CyberServe EPYC EP2-G482-Z50 GPU Server

10 x FHFL Gen3 expansion slots for GPU cards, 2 x 10Gb/s BASE-T LAN ports (Intel® X550-AT2), 8 x 2.5" NVMe, 2 x SATA/SAS 2.5" hot-swappable HDD/SSD bays, 12 x 3.5" SATA/SAS hot-swappable HDD/SSD bays

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
22
Drive Interface:
SATA , 12Gb/s SAS, NVMe
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
32x 3200MHz
GPU Slots:
10x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
4.1TB
Configure from €11,644
CyberServe EPYC EP2 424-4NVMe-G GPU Server G4

Dual AMD EPYC 9004 Series 8x GPU Server - 24x 2.5" NVMe / SATA / SAS + 4x NVMe Dedicated Drives

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
24
Drive Interface:
SATA , 12Gb/s SAS, NVMe, M.2
Server Processor:
AMD EPYC 9004 Series
Memory DIMMS:
24x 4800MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
1.5TB
Configure from €12,032
CyberServe Xeon SP2-6049GP-TRT GPU Server

20x PCI-E 3.0 x16 supports up to 20x single width GPU, 24x hot-swap 3.5" drives, 2x 10GBase-T LAN port

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Drive Interface:
SATA , 12Gb/s SAS, NVMe
Server Processor:
Xeon Scalable Processor Gen 2
GPU Support:
Tesla GPU Optimised
Max RAM Capacity:
GB
Configure from €14,025
CyberServe Xeon SP2-412T G3 GPU

Supports 10x double slot GPU cards, redundant power supply, 12 x 3.5/2.5" NVMe/SATA hot-swappable bays. Built for AI & HPC

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA , 12Gb/s SAS, NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
32x 3200MHz
GPU Slots:
10x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard, Front I/O Ports
Max RAM Capacity:
GB
Configure from €14,480
CyberServe Xeon SP2-412 NVMe G4 GPU

Supports 10x double slot GPU cards, dual 10Gb/s BASE-T LAN ports, redundant power supply, 12 x 3.5/2.5" NVMe/SATA hot-swappable bays. Built for AI & HPC

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA , 12Gb/s SAS, NVMe, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
32x 4800MHz
GPU Slots:
10x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard, Front I/O Ports
Max RAM Capacity:
2TB
Configure from €24,818
CyberServe Xeon SP2-824 NVMe G4 GPU

Supports 8x HGX H100 GPUs, dual 10Gb/s BASE-T LAN ports, redundant power supply, 16 x 2.5" NVMe, 8x SATA hot-swappable bays. Built for AI Training and Inferencing.

Form Factor:
8U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
24
Drive Interface:
SATA , NVMe, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
32x 4800MHz
GPU Slots:
8x SXM GPU
GPU Support:
Tesla GPU Optimised
Features:
High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
GB
Configure from €29,139
CyberServe EPYC EP2-824 NVMe G4 GPU Server

Supports 8x HGX H100 GPUs, Dual AMD EPYC 9004 Series 8x GPU Server - 16x 2.5" NVMe + 8x SATA Drives Hot-Swappable bays. Built for AI Training and Inferencing.

Form Factor:
8U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
24
Drive Interface:
SATA , NVMe, M.2
Server Processor:
AMD EPYC 9004 Series
Memory DIMMS:
24x 4800MHz
GPU Slots:
8x SXM GPU
GPU Support:
Tesla GPU Optimised
Features:
High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
GB
Configure from €30,638
CyberServe EPYC EP2-2124GQ-NART GPU Server

High Density 2U System with NVIDIA® HGX™ A100 4-GPU, Direct connect PCI-E Gen4 Platform with NVIDIA® NVLink™, IPMI 2.0 + KVM with dedicated 10G LAN

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
4
Drive Interface:
SATA , 12Gb/s SAS, NVMe
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
32x 3200MHz
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Redundant Power Supply - Standard
Max RAM Capacity:
4.1TB
Configure from €72,997
CyberServe EPYC EP2-4124GO-NART GPU Server

8x NVIDIA A100 Gen4, 6x NVLink Switch Fabric, 2x M.2 on board and 4 Hybrid SATA/Nvme, 8x PCIe x16 Gen4 Slots

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
6
Drive Interface:
NVMe, M.2
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
32x 3200MHz
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Redundant Power Supply - Standard
Max RAM Capacity:
4.1TB
Configure from €148,867
CyberServe Xeon SP2-308 NVMe G4 GPU

Supports 4x SXM5 GPU Modules, dual 10Gb/s BASE-T LAN ports, redundant power supply, 8 x 2.5" NVMe/SATA hot-swappable bays. Built for AI & HPC

Form Factor:
3U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
8
Drive Interface:
SATA , NVMe
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
16x 4800MHz
GPU Slots:
4x SXM GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, Extra Expansion Slots, Redundant Power Supply - Standard, Front I/O Ports
Max RAM Capacity:
1TB
Configure from €149,908

Huge Educational and Research Discount

GPU: NVIDIA P40 | NVIDIA P100 PCIe | NVIDIA V100S | NVIDIA Titan RTX | NVIDIA T4 | NVIDIA A100
Architecture: Pascal | Pascal | Volta | Turing | Turing | Ampere
SMs: 30 | 56 | 80 | 72 | 72 | 108
CUDA Cores: 3,840 | 3,584 | 5,120 | 4,608 | 2,560 | 6,912
Tensor Cores: N/A | N/A | 640 | 576 | 320 | 432
Frequency: 1,303 MHz | 1,126 MHz | 1,267 MHz | 1,350 MHz | 1,590 MHz | -
TFLOPS (double): 367.4 GFLOPS (1:32) | 4.7 | 8.2 | - | 65 | 9.7
TFLOPS (single): 12 | 9.3 | 16.4 | 16.3 | 8.1 | 19.5
TFLOPS (half/Tensor): 183.7 GFLOPS (1:64) | 18.7 | 130 | 130 | 65.13 TFLOPS (8:1) | 624
Cache: 3 MB L2 | 4 MB L2 | 6 MB | - | 4 MB | 40 MB
Max. Memory: 24 GB | 16 GB | 32 GB | 24 GB | 16 GB | 40 GB
Memory B/W: 346 GB/s | 720 GB/s | 1,134 GB/s | 672 GB/s | 350 GB/s | 1,555 GB/s
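The headline figures above (SM count, memory size, compute capability) can be read back from any installed card through the standard CUDA runtime API. The following is a minimal, generic sketch - plain CUDA runtime calls only, no vendor- or Broadberry-specific tooling assumed - built with nvcc:

// query_gpu.cu - print SM count, memory size and compute capability
// for every NVIDIA GPU visible to the CUDA runtime.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s\n", i, prop.name);
        printf("  SMs:                %d\n", prop.multiProcessorCount);
        printf("  Global memory:      %.1f GB\n", prop.totalGlobalMem / 1e9);
        printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
    }
    return 0;
}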

The NVIDIA Tesla P40 GPU accelerator works with NVIDIA Quadro vDWS software and is the first system to combine an enterprise-grade visual computing platform for simulation, HPC rendering, and design with virtual applications, desktops, and workstations. This gives organisations the freedom to virtualise both complex visualisation and compute (CUDA and OpenCL) workloads.

NVIDIA Tesla P40

The NVIDIA Tesla P40 taps into the industry-leading NVIDIA Pascal architecture to deliver up to twice the professional graphics performance of the NVIDIA Tesla M60. With 24 GB of framebuffer and 24 NVENC encoder sessions, it supports 24 virtual desktops (1 GB profile) or 12 virtual workstations (2 GB profile), providing the best end-user scalability per GPU. This powerful GPU also supports eight different user profiles, so virtual GPU resources can be efficiently provisioned to meet the needs of the user. Tesla P40 GPUs are also available in a wide variety of industry-standard 2U servers.

With NVIDIA virtual GPU software and the NVIDIA Tesla P40, organisations can now virtualise high-end applications with large, complex datasets for rendering and simulations, as well as virtualising modern business applications. Resource allocation ensures that users have the right GPU acceleration for the task at hand. NVIDIA software shares the power of Tesla P40 GPUs across multiple virtual workstations, desktops, and apps. This means you can deliver an immersive user experience for everyone from office workers to mobile professionals to designers through virtual workspaces with improved management, security, and productivity.

Exceptional User Experience

Get the ultimate user experience for any workload or vGPU profile. NVIDIA Quadro vDWS software with the Tesla P40 GPU supports compute workloads (CUDA and OpenCL) for every vGPU, enabling professional and design engineering workflows at peak performance. The Tesla P40 delivers up to 2x the graphics performance of the M60. Users can count on consistent performance with the new resource scheduler, which provides deterministic QoS and eliminates the "noisy neighbour" problem.

Optimal Management and Monitoring

Management tools give you vGPU visibility into the host or guest level, with application level monitoring capabilities. This lets IT teams intelligently design, manage, and support their end user's experience. End-to-end management and monitoring also deliver real-time insight into GPU performance. Integration with VMware vRealise Operations (vROps), Citrix Director and XenCenter gives you flexibility and control.

Flexible GPU Infrastructure

Support up to 50% more users per Pascal GPU relative to a single Maxwell GPU, for scaling high performance virtual graphics and compute. More granular user profiles give you more precise provisioning of vGPU resources, and larger profile sizes - up to 3X larger GPU framebuffer than the M60 - for supporting your most demanding users. The P40 provides flexibility to your system and helps you drive down overall TCO.


NVIDIA Tesla P100 GPU accelerators are the world's first AI supercomputing data centre GPUs. They tap into the NVIDIA Pascal GPU architecture to deliver a unified platform for accelerating both HPC and AI. With higher performance and fewer (but significantly faster) nodes, the Tesla P100 enables data centres to dramatically increase throughput while also saving money.

NVIDIA Tesla P100

With over 500 HPC applications accelerated - including 15 of the top 15 - as well as all deep learning frameworks, every HPC customer can deploy accelerators in their data centres.

Tesla P100 for PCIe enables mixed-workload HPC data centres to realise a dramatic jump in throughput while saving money. A single GPU-accelerated node powered by four Tesla P100s interconnected with PCIe replaces up to 32 commodity CPU nodes for a variety of applications. Being able to complete tasks with far fewer nodes ensures customers can save up to 70% in overall data centre costs.

The Tesla P100 is reimagined from silicon to software, crafted with innovation at every level. Each groundbreaking technology delivers a dramatic jump in performance to inspire the creation of the world's fastest compute node.

Exponential Performance Leap with Pascal Architecture

The NVIDIA Pascal architecture enables the Tesla P100 to deliver superior performance for HPC and hyperscale workloads. With more than 21 teraflops of FP16 performance, Pascal is optimised to drive exciting new possibilities in deep learning applications. Pascal also delivers over 5 and 10 teraflops of double and single precision performance for HPC workloads.

Unprecedented Efficiency with CoWoS with HBM2

The Tesla P100 tightly integrates compute and data on the same package by adding CoWoS (Chip-on-Wafer-on-Substrate) with HBM2 technology to deliver 3x memory performance over the NVIDIA Maxwell architecture. This delivers a generational leap in time-to-solution for data-intensive applications.

Applications at Massive Scale with NVIDIA NVLink

The revolutionary NVIDIA NVLink high-speed bidirectional interconnect is designed to scale applications across multiple GPUs by delivering 5x higher performance compared to today's best-in-class technology.

Simpler Programming with Page Migration Engine

Page Migration Engine frees up developers to focus more on tuning for computing performance and less on managing data movement. Applications can now scale beyond the GPU's physical memory size to a virtually limitless amount of memory.
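As a rough illustration of what the Page Migration Engine enables, CUDA's Unified Memory API exposes a single allocation that both CPU and GPU can touch, with pages migrated on demand rather than copied explicitly. The sketch below assumes nothing beyond a standard CUDA toolkit; sizes are illustrative only:

// unified_memory.cu - a managed buffer that the Page Migration Engine
// moves between CPU and GPU memory on demand.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, size_t n, float factor) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const size_t n = 1 << 28;                       // ~1 GiB of floats (illustrative)
    float *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));    // one pointer, valid on CPU and GPU;
                                                    // on Pascal-class and newer GPUs the
                                                    // allocation may even exceed GPU memory
    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;  // first touched on the CPU

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // pages migrate to the GPU as accessed
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);              // pages migrate back on CPU access
    cudaFree(data);
    return 0;
}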


The fastest, highest-performance PC graphics card ever created, the NVIDIA Titan RTX is powered by the Turing architecture and brings 130 Tensor TFLOPS of performance, 576 Tensor Cores and 24 GB of super-fast GDDR6 memory to your PC. The Titan RTX powers machine learning, AI and creative workflows.

NVIDIA Titan RTX
Ceiling-Shattering Speed

It is hard to find a better option for dealing with computationally intense workloads than the Titan RTX. Created to dominate in even the most demanding of situations, it brings ultimate speed to your data centre. The Titan RTX is built on NVIDIA's Turing GPU Architecture. It includes the very latest Tensor Core and RT Core technology and is also supported by NVIDIA drivers and SDKs. This enables you to work faster and leads to improved results.

Perfect for AI Development

AI models can be trained significantly faster with 576 NVIDIA Turing mixed-precision Tensor Cores providing 130 TFLOPS of AI performance. This card works well with all the best-known deep learning frameworks, is compatible with NVIDIA GPU Cloud and is supported by NVIDIA's CUDA-X AI SDK.

Application Acceleration

It allows for application acceleration, working significantly faster with 4,608 NVIDIA Turing CUDA cores accelerating end-to-end data science workflows. With 24 GB of GDDR6 memory you can process gargantuan sets of data.

Revolutionary HPC Performance

The Titan RTX reaches a level of performance far beyond its predecessors. Built with multi-precision Turing Tensor Cores, Titan RTX provides breakthrough performance from FP32, FP16, INT8 and INT4, making quicker training and inferencing of neural networks possible.


NVIDIA Tesla T4 GPUs power the planet's most reliable mainstream servers. They fit easily into standard data centre infrastructures. Designed into a low-profile, 70-watt package, the T4 is powered by NVIDIA Turing Tensor Cores, supplying innovative multi-precision performance to accelerate a vast range of modern applications.

NVIDIA Tesla T4
Dominate the Future

It is almost certain that we are heading towards a future where each of your customer interactions, every one of your products and services will be influenced and enhanced by Artificial Intelligence. AI is going to become the driving force behind all future business, and whoever adapts first to this change is going to hold the key to business success in the long term.

The NVIDIA T4 GPU allows you to cost-effectively scale artificial intelligence-based services. It accelerates diverse cloud workloads, including high-performance computing, data analytics, deep learning training and inference, graphics and machine learning. T4 features multi-precision Turing Tensor Cores and new RT Cores. It is based on NVIDIA Turing architecture and comes in a very energy efficient small PCIe form factor. T4 delivers ground-breaking performance at scale.

Breakthrough Performance

T4 harnesses revolutionary Turing Tensor Core technology featuring multi-precision computing to deal with diverse workloads. The T4 is capable of reaching blazing fast speeds.

State-of-the-Art Inference in Real Time

User engagement will be a vital component of successful AI implementation, with responsiveness being one of the main keys. This will be especially apparent in services such as visual search, conversational AI and recommender systems. As models continue to advance and increase in complexity, ever-growing compute capability will be required. The T4 provides massively improved throughput, allowing more requests to be served in real time.

Video Transcoding Performance

The medium of online video is quite possibly the number one way of delivering information in the modern age. As we move forward into the future, the volume of online videos will only continue to grow exponentially. Simultaneously, the demand for answers to how to efficiently search and gain insights from video continues to grow.

T4 provides ground-breaking performance for AI video applications, featuring dedicated hardware transcoding engines which deliver 2x the decoding performance of previous-generation GPUs. T4 is able to decode nearly 40 full-HD video streams, making it simple to integrate scalable deep learning into video pipelines to provide inventive, smart video services.


With 32 GB of HBM2 memory and powered by the NVIDIA Volta GPU architecture, the NVIDIA Tesla V100S delivers the performance of up to 100 CPUs in a single GPU, allowing data engineers, researchers and scientists to take on challenges once believed to be impossible.

NVIDIA Tesla V100S

The NVIDIA Tesla V100S is the most advanced breakthrough data centre GPU ever created to accelerate AI, Graphics and HPC. Tesla V100S is the crown jewel of the Tesla data centre computing platform for deep learning, graphics and HPC. Over 450 HPC applications and every major deep learning framework can be accelerated by the Tesla platform. They are available everywhere from desktops to servers to cloud services, providing humungous performance gains and cost saving opportunities.

The previous Tesla V100 had been hailed as the most advanced data centre graphics card, and this new GPU takes things up a notch. Designed for AI acceleration, high-performance computing, graphics and data science, the NVIDIA Tesla V100S is a real game changer.

Improvements Over the V100

The Tesla V100S is an upgrade over the Tesla V100, with its level of performance going above and beyond what was possible with the V100.

The main difference between the two is in the available memory configurations: the NVIDIA Tesla V100S comes only in a 32 GB HBM2 version, and it boasts a higher boost clock speed (1,601 MHz) and memory bandwidth (1,134 GB/s).

With this enhanced clock speed, the V100S can deliver up to 17.1% higher single- and double-precision performance, at 16.4 TFLOPS and 8.2 TFLOPS respectively, in comparison to the original V100. Tensor performance has also been enhanced by 16.1%, now reaching 130 TFLOPS.
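Those percentages are consistent with the commonly quoted PCIe Tesla V100 baseline of roughly 14 TFLOPS single precision, 7 TFLOPS double precision and 112 TFLOPS Tensor performance: 16.4 / 14 ≈ 8.2 / 7 ≈ 1.171 (about 17.1% higher), and 130 / 112 ≈ 1.161 (about 16.1% higher).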


The NVIDIA A100 GPU provides unmatched acceleration at every scale for data analytics, AI and high-performance computing, attacking the very toughest computing challenges. The A100 can efficiently and effectively scale to thousands of GPUs. With NVIDIA Multi-Instance GPU (MIG) technology, it can be partitioned into up to seven GPU instances, accelerating workloads of every size.

NVIDIA Tesla A100
High Performance Computing

The NVIDIA A100 introduces double-precision Tensor Cores, delivering the biggest milestone since double-precision computing was introduced in GPUs. The speed boost this offers can be immense: a 10-hour double-precision simulation running on NVIDIA V100 Tensor Core GPUs is cut down to only 4 hours when run on A100s. High-performance applications can also leverage TF32 precision in the A100's Tensor Cores to reach up to 10x higher throughput for single-precision dense matrix multiply operations.
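As an illustration of how that TF32 path is typically used, cuBLAS (version 11 or later) lets an application opt a handle into TF32 Tensor Core math so that ordinary FP32 GEMMs run on the Tensor Cores. The sketch below is a minimal example only - the matrix size is a placeholder and the matrices are left uninitialised, since the point is the API plumbing rather than the numerical result:

// tf32_gemm.cu - single-precision GEMM routed through the TF32 Tensor Core
// path on A100-class GPUs (build with: nvcc tf32_gemm.cu -lcublas).
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int n = 4096;                        // illustrative square matrices
    float *A, *B, *C;
    cudaMalloc(&A, n * n * sizeof(float));
    cudaMalloc(&B, n * n * sizeof(float));
    cudaMalloc(&C, n * n * sizeof(float));     // left uninitialised for brevity

    cublasHandle_t handle;
    cublasCreate(&handle);
    // Opt this handle into TF32 Tensor Core math for FP32 GEMMs (cuBLAS 11+).
    cublasSetMathMode(handle, CUBLAS_TF32_TENSOR_OP_MATH);

    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, A, n, B, n, &beta, C, n);

    cudaDeviceSynchronize();
    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}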

High Performance Data Analytics

In modern data centres it is vital to be able to visualise, analyse and transform huge datasets into insights. However, scale-out solutions often end up bogged down as datasets are spread across many servers. Servers powered by the A100 deliver the necessary compute power, as well as 1.6 TB/sec of memory bandwidth and huge scalability.

Enterprise-Ready Utilisation

The NVIDIA A100 with MIG maximises GPU-accelerated infrastructure utilisation in a way never seen before. With MIG, an A100 GPU can be partitioned into up to 7 independent instances. This can give a multitude of users access to GPU acceleration for their applications and projects.


Broadberry GPU Servers harness the processing power of NVIDIA Tesla graphics processing units for millions of applications such as image and video processing, computational biology and chemistry, fluid dynamics simulation, CT image reconstruction, seismic analysis, ray tracing, and much more.

As computing evolves and processing moves from the CPU alone to co-processing between the CPU and GPUs, NVIDIA developed the CUDA parallel computing architecture to harness these performance benefits.

Speak to Broadberry GPU computing experts to find out more.


Accelerating scientific discovery, visualising big data for insights, and providing smart services to consumers are everyday challenges for researchers and engineers. Solving these challenges takes increasingly complex and precise simulations, the processing of tremendous amounts of data, or training sophisticated deep learning networks. These workloads also require accelerating data centres to meet the growing demand for exponential computing.

NVIDIA Tesla is the world's leading platform for accelerated data centres, deployed by some of the world's largest supercomputing centres and enterprises. It combines GPU accelerators, accelerated computing systems, interconnect technologies, development tools and applications to enable faster scientific discoveries and big data insights.

At the heart of the NVIDIA Tesla platform are the massively parallel GPU accelerators that provide dramatically higher throughput for compute-intensive workloads - without increasing the power budget and physical footprint of data centres.


Traditionally, servers are configured to use a CPU for processing - a component built to handle a wide range of computing requirements that works perfectly for traditional applications such as email and storage servers. There is, however, a growing number of applications which benefit enormously from using a graphics card for processing.
A GPU server is a server configured with graphics cards which are built to harness the raw processing power of GPUs. Through an offloading process, the CPU is able to send certain tasks to the GPUs, greatly increasing server performance.

GPUs are designed to deal with anything thrown at them, thriving in the most computationally intense applications.
GPU-dedicated servers are often used for fast 3D processing, error-free number crunching and accurate floating-point arithmetic, where the design of graphics processing units allows them to run compute workloads considerably faster than a CPU could. While they often operate at lower clock speeds than CPUs, GPUs can possess thousands of cores, allowing them to run thousands of individual threads at the same time - an approach known as parallel computing.
In computationally intensive environments, offloading tasks to a GPU is an excellent way to minimise pressure on the CPU, mitigating potential performance bottlenecks.
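As a deliberately tiny illustration of that offloading model, the sketch below copies two arrays to the GPU and launches one lightweight thread per element, so a single kernel call spreads the work across thousands of cores (standard CUDA only; sizes are illustrative):

// offload_add.cu - the CPU hands an element-wise addition to the GPU,
// which processes the array with one lightweight thread per element.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // each thread handles one element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);   // the CPU offloads the data...
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    vector_add<<<(n + 255) / 256, 256>>>(da, db, dc, n); // ...and the task itself
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("hc[42] = %f\n", hc[42]);                     // expect 126.0
    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}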

A significant number of the Big Data tasks which create business value involve constantly repeating the same operations. The huge number of cores available in GPU servers is conducive to this type of work: it can be split up across the cores to get through voluminous data sets at a faster rate.

GPU servers tend to use less energy in comparison to CPU-only based servers, providing long term reduction in TCO.

Broadberry GPU-optimised servers feature up to 3TB of RAM and can be powered by the latest Intel Xeon Scalable processors or AMD EPYC series processors. With a massive range of GPU options available, Broadberry GPU-dense servers can be configured with up to 10x NVIDIA Tesla GPU cards, the world's leading platform for accelerating data centres. Deployed by many of the planet's largest supercomputing centres and enterprises, it utilises GPU accelerators, interconnect technologies, accelerated computing systems, development tools and applications to allow for faster scientific discoveries and big data insights.
At the centre of the NVIDIA Tesla platform are the massively parallel GPU accelerators, which deliver significantly higher throughput for compute-intensive workloads without a corresponding rise in data centre footprint or power consumption.
Broadberry GPU servers are built around industry-leading GPU-optimised server chassis which have been designed and rigorously tested to run up to 10x GPUs for massively parallel computing, whilst staying cool thanks to the latest advances in server cooling technology.
Our online configurator allows you to configure your GPU-optimised server with a wide range of powerful processors and RAM options, as well as SSD, NVMe or HDD storage.

GPUs excel at performing massively parallel operations very quickly - up to 10x quicker than their CPU counterparts. As GPUs are designed to perform parallel operations on multiple sets of data, they can render high-resolution images and 3D video concurrently, analyse big data sets faster or train your AI application. NVIDIA Tesla based GPU servers are also often used for non-graphical tasks, including scientific computation and machine learning.

The number of GPUs that a GPU-optimised server can be configured with used to be limited by three main factors: the number of PCIe lanes on the CPU, physical space in the chassis, and the power that the system's power supply could provide. Working closely with our partners, Broadberry's GPU server range utilises the latest technical advances in the industry to allow up to 10x double-width GPU cards in a system, or 20x single-width cards.


Call a Broadberry Storage & Server Specialist Now: +49 89 1208 5600

We'll be glad to get back to you




Our Rigorous Testing

All Broadberry server and storage solutions undergo a 48-hour test run before leaving our warehouse. Together with this testing process and the high-quality, industry-leading components we use, this ensures that all of our server and storage solutions meet the extremely strict quality standards we demand.


Unmatched Flexibility

Our most important goal is to offer high-quality server and storage solutions at an outstanding price-to-performance ratio. We understand that every business has different requirements, and we provide unmatched flexibility for individually tailored server and storage solutions, meeting your requirements quickly and efficiently.

Trusted by the World's Top Brands

We have established ourselves as a leading provider of server and storage solutions in England, and since 1989 we have been supplying our server and storage solutions to some of the world's largest corporations. Our customer base includes: