

| GPU VM type | H200 - S | H200 - M | H200 - L | H200 - XL |
|---|---|---|---|---|
| GPU type | H200 PCIe | H200 PCIe | H200 PCIe | H200 PCIe |
| Number of GPUs | 1 | 2 | 4 | 8 |
| Dedicated vCPUs | 15 | 30 | 60 | 127 |
| RAM | 267 GiB | 534 GiB | 1,068 GiB | 2,136 GiB |
| Memory | 1,024 GB | 1,536 GB | 2,048 GB | 4,096 GB |
| Price per hour (Pay-as-you-go) | £2.554 | £5.109 | £10.218 | £20.436 |
The Cloud GPU VM is available only via the API and only in the Frankfurt am Main data centre (de/fra/2).
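Because provisioning happens through the API, a request might look like the following minimal sketch in Python. It assumes the IONOS Cloud REST API v6 base URL and Basic authentication; the data-centre ID, server name and request body are illustrative placeholders rather than the authoritative schema, so check the official API reference for the exact GPU VM parameters.

```python
import os
import requests

# Minimal sketch: request a GPU VM via the IONOS Cloud REST API.
# Base URL, payload fields and units are assumptions for illustration only.
API_BASE = "https://api.ionos.com/cloudapi/v6"          # assumed v6 endpoint
DATACENTER_ID = os.environ["IONOS_DATACENTER_ID"]       # data centre in de/fra/2
AUTH = (os.environ["IONOS_USER"], os.environ["IONOS_PASSWORD"])

payload = {
    "properties": {
        "name": "h200-s-training-node",   # hypothetical server name
        "cores": 15,                      # matches the H200 - S shape above
        "ram": 273408,                    # 267 GiB expressed in MiB (unit assumed)
    }
}

response = requests.post(
    f"{API_BASE}/datacenters/{DATACENTER_ID}/servers",
    json=payload,
    auth=AUTH,
    timeout=30,
)
response.raise_for_status()
print("Provisioning request accepted:", response.json().get("id"))
```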
Do you need a server for your project? Contact our experts today.


With the Cloud GPU VM, you can expand your cloud infrastructure with graphics processors from NVIDIA. These virtual machines enable accelerated computing for workloads where conventional CPUs reach their limits. They benefit from massive parallel processing, which is ideal for compute-intensive tasks in research, industry and development.
The GPU instances are optimised for high performance computing (HPC) and data-intensive processes. Key areas of application include:
- Artificial intelligence: training of deep learning models and fast inference for generative AI
- Data science: accelerated big data analyses and complex simulations
- Visualisation: rendering, 3D modelling and Virtual Desktop Infrastructure (VDI)
Powerful NVIDIA GPUs from the enterprise segment are used. Thanks to pass-through technology, your VMs access the physical resources of the card directly. This guarantees you maximum performance without virtualisation losses – essential for demanding AI models and simulations.
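Inside the VM, the passed-through card appears as an ordinary local NVIDIA device. A quick way to confirm this, assuming a CUDA-enabled PyTorch build is installed on the instance, is a short check like this sketch:

```python
import torch

# Sanity check that the passed-through H200 is visible to CUDA inside the VM.
# Assumes a CUDA-enabled PyTorch installation on the GPU instance.
if torch.cuda.is_available():
    for index in range(torch.cuda.device_count()):
        name = torch.cuda.get_device_name(index)
        total_gib = torch.cuda.get_device_properties(index).total_memory / 2**30
        print(f"GPU {index}: {name} ({total_gib:.0f} GiB)")
else:
    print("No CUDA device visible - check the driver installation.")
```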
You only pay for the resources you use. Billing is accurate to the minute and usage-based. This gives you full cost control and the flexibility to scale computing power for peak loads at short notice without having to make long-term investments in your own hardware.
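As a rough illustration of per-minute billing, the cost of a short job can be estimated from the hourly rates in the table above; the runtime below is an example figure, not a measured value.

```python
# Illustrative cost estimate for per-minute, usage-based billing.
# Hourly rate taken from the H200 - S row above; runtime is an example value.
HOURLY_RATE_GBP = 2.554
runtime_minutes = 95            # e.g. a 1 h 35 min fine-tuning job

cost = HOURLY_RATE_GBP / 60 * runtime_minutes
print(f"Estimated cost: £{cost:.2f}")   # roughly £4.04 for 95 minutes
```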
The GPU instances can be integrated seamlessly into your existing infrastructure. For example, combine them with the Compute Engine for scalable container management or with IONOS Object Storage for storing large training data sets. This allows you to build a powerful end-to-end environment for your projects.
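Since IONOS Object Storage exposes an S3-compatible interface, training data can be staged with any standard S3 client. In the sketch below, the endpoint URL, bucket name, object key and credentials are placeholders to be replaced with the values from your own account.

```python
import boto3

# Minimal sketch: upload a training data set to IONOS Object Storage via its
# S3-compatible interface. Endpoint, bucket and key are illustrative only.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-eu-central-1.ionoscloud.com",  # assumed endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.upload_file(
    Filename="training-data.tar.gz",   # local archive with training samples
    Bucket="my-training-data",         # hypothetical bucket name
    Key="datasets/training-data.tar.gz",
)
print("Upload complete")
```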