Managed Kubernetes features

Resilience through auto-scaling

Auto-scaling ensures the high availability of your Kubernetes deployments while keeping costs down. It comes into play when a node’s resources aren’t sufficient to run the containers and Kubernetes needs to add more worker nodes to the pool.
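
As an illustration only, the Python sketch below (using the official Kubernetes client) creates a pod whose resource requests may not fit on the current workers; a pod that stays unschedulable for this reason is exactly what prompts the autoscaler to add a node to the pool. The kubeconfig path, image, and request sizes are placeholders.

    from kubernetes import client, config

    # Load the kubeconfig downloaded for the Managed Kubernetes cluster (placeholder path).
    config.load_kube_config(config_file="kubeconfig.yaml")

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="demo-worker"),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="app",
                    image="nginx:1.27",
                    # If no existing node has this much free CPU and memory, the pod stays
                    # Pending, which is the signal for the autoscaler to add a worker node.
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "2", "memory": "4Gi"}
                    ),
                )
            ]
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)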

Precisely define the number of nodes

You can define the initial, minimum, and maximum number of nodes in a node pool. The upper bound keeps costs down and stops the service from draining too many resources, while the lower bound ensures that the platform always has sufficient capacity to start. This is useful when the service sees temporary spikes that pass in less time than a scale-up would take.
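
The Python sketch below shows how these bounds could be set when creating a node pool through the Cloud API. The endpoint path, the nodeCount and autoScaling field names, and the token-based authentication reflect one reading of the Cloud API v6 and should be checked against the API reference; all IDs and sizes are placeholders.

    import os
    import requests

    API = "https://api.ionos.com/cloudapi/v6"
    HEADERS = {"Authorization": f"Bearer {os.environ['IONOS_TOKEN']}"}  # token auth assumed

    # Node pool with an initial size plus autoscaler bounds (field names assumed).
    nodepool = {
        "properties": {
            "name": "web-pool",
            "datacenterId": "<datacenter-uuid>",
            "nodeCount": 2,                       # initial quantity
            "coresCount": 4,
            "ramSize": 8192,
            "storageType": "SSD",
            "storageSize": 100,
            "availabilityZone": "AUTO",
            "autoScaling": {
                "minNodeCount": 2,                # floor: enough capacity to start
                "maxNodeCount": 6,                # ceiling: caps cost and resource drain
            },
        }
    }

    resp = requests.post(f"{API}/k8s/<cluster-uuid>/nodepools", json=nodepool, headers=HEADERS)
    resp.raise_for_status()
    print(resp.json()["id"])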

Futureproof storage

Fully integrated persistent data storage is available thanks to integration into the IONOS Cloud ecosystem. The pre-installed CSI-based IONOS storage class allows you to provision stateful applications and stateless web servers. Persistent volumes are automatically created through a persistent volume claim.
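
For example, a claim like the following Python sketch (official Kubernetes client) is enough to have a volume provisioned. It assumes the pre-installed IONOS storage class is the cluster default; otherwise, set storage_class_name to the class reported by the cluster.

    from kubernetes import client, config

    config.load_kube_config(config_file="kubeconfig.yaml")  # placeholder path

    # A 10 Gi claim; once it is bound, the CSI driver behind the pre-installed storage
    # class provisions the persistent volume automatically.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            # storage_class_name="<ionos-storage-class>",  # set explicitly if not the default
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)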

Easy integration and automation

For the automation needed in everyday operations, the integration of Kubernetes into CI/CD is done via the Cloud API. IONOS Cloud provides various SDKs and config management tools that simplify the integration of IONOS Cloud and Managed Kubernetes.
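
A typical pipeline step is fetching the cluster's kubeconfig and then talking to the cluster with a standard Kubernetes client, roughly as in the hedged Python sketch below. The kubeconfig endpoint and its plain-text response are assumptions about the Cloud API v6; the IONOS SDKs expose equivalent calls.

    import os
    import requests
    from kubernetes import client, config

    API = "https://api.ionos.com/cloudapi/v6"
    HEADERS = {"Authorization": f"Bearer {os.environ['IONOS_TOKEN']}"}

    # Download the cluster's kubeconfig (endpoint path and plain-text response assumed).
    kubeconfig = requests.get(f"{API}/k8s/<cluster-uuid>/kubeconfig", headers=HEADERS)
    kubeconfig.raise_for_status()
    with open("kubeconfig.yaml", "w") as f:
        f.write(kubeconfig.text)

    # Use it like any other kubeconfig, e.g. to list the deployments the pipeline manages.
    config.load_kube_config(config_file="kubeconfig.yaml")
    for dep in client.AppsV1Api().list_namespaced_deployment("default").items:
        print(dep.metadata.name)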

Data Center Designer integration

Kubernetes is closely integrated into the IONOS Cloud Data Center Designer (DCD). The browser-based DCD interface provides an intuitive way to create Kubernetes clusters and to create or delete node pools.

Up-to-date platform

Managed Kubernetes reveals the available upgrade versions for each cluster and node pool, enabling compatible upgrades. Automatic routines eliminate version incompatibilities and ensure the platform is always up to date, consistent, and secure.
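
A hedged sketch of reading this information through the Cloud API: the /k8s/versions endpoint and the availableUpgradeVersions property are assumptions to verify against the API reference.

    import os
    import requests

    API = "https://api.ionos.com/cloudapi/v6"
    HEADERS = {"Authorization": f"Bearer {os.environ['IONOS_TOKEN']}"}

    # Kubernetes versions currently offered by the platform (endpoint assumed).
    print(requests.get(f"{API}/k8s/versions", headers=HEADERS).json())

    # Upgrade versions compatible with one specific cluster (property name assumed).
    cluster = requests.get(f"{API}/k8s/<cluster-uuid>", headers=HEADERS).json()
    print(cluster["properties"].get("availableUpgradeVersions", []))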

Intelligent architecture

IONOS Cloud provides all the necessary technical interfaces for product automation. Kubernetes is integrated into the DCD as well as the Cloud API, SDKs, and config management tools, so you can gain insight into your existing Managed Kubernetes clusters within the IONOS Cloud ecosystem at any time. For a fault-tolerant Kubernetes architecture, you can distribute multiple node pools across different data centres, and georedundancy keeps the control plane highly available.

Individual network configuration

If the address of a Kubernetes node needs to be known in advance, you can use dedicated IPs specified at node pool level. Kubernetes supports access to node pools via a private network. This allows you to disable the cloud's DHCP feature and set up your own DHCP server to individually control the assignment of IP addresses.
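
In Cloud API terms this roughly corresponds to the node pool's publicIps list and its lans attachment, as in the Python sketch below; the field names are assumptions to verify against the API reference, and all addresses and IDs are placeholders.

    import os
    import requests

    API = "https://api.ionos.com/cloudapi/v6"
    HEADERS = {"Authorization": f"Bearer {os.environ['IONOS_TOKEN']}"}

    nodepool_update = {
        "properties": {
            "nodeCount": 2,
            # Dedicated public IPs so node addresses are known in advance; one spare
            # address beyond the node count is commonly required for node replacement (assumption).
            "publicIps": ["203.0.113.10", "203.0.113.11", "203.0.113.12"],
            # Attach the pool to a private LAN with the cloud DHCP disabled, so your own
            # DHCP server in that LAN controls address assignment.
            "lans": [{"id": 3, "dhcp": False}],
        }
    }

    resp = requests.put(f"{API}/k8s/<cluster-uuid>/nodepools/<nodepool-uuid>",
                        json=nodepool_update, headers=HEADERS)
    resp.raise_for_status()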

Access to Kubernetes objects

You get full access to the Kubernetes API at the cluster admin level. To access your clusters, simply download the configuration file in your preferred format. For maximum security, you can restrict access to the Kubernetes API to a whitelist of IP addresses. Likewise, you can store audit logs for access to the Kubernetes API in an S3 bucket you define.
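
The sketch below shows how such a cluster update might look via the Cloud API; apiSubnetAllowList and s3Buckets are assumed property names to verify against the API reference, and the CIDR and bucket name are placeholders.

    import os
    import requests

    API = "https://api.ionos.com/cloudapi/v6"
    HEADERS = {"Authorization": f"Bearer {os.environ['IONOS_TOKEN']}"}

    cluster_update = {
        "properties": {
            "name": "production",
            # Only these networks may reach the Kubernetes API (property name assumed).
            "apiSubnetAllowList": ["198.51.100.0/24"],
            # Audit logs for Kubernetes API access are written to this bucket (property name assumed).
            "s3Buckets": [{"name": "my-k8s-audit-logs"}],
        }
    }

    resp = requests.put(f"{API}/k8s/<cluster-uuid>", json=cluster_update, headers=HEADERS)
    resp.raise_for_status()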

Labels and annotations

Use labels and annotations to provide Kubernetes objects with metadata. Labels identify objects and let you select them in a specific context. Annotations attach non-identifying metadata that other tools and resources can read when they work with the object.
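
A short sketch with the official Kubernetes Python client: a labelled, annotated ConfigMap and a label selector that finds it. All names and values are placeholders.

    from kubernetes import client, config

    config.load_kube_config(config_file="kubeconfig.yaml")  # placeholder path
    v1 = client.CoreV1Api()

    # Labels identify and select objects; annotations carry free-form metadata for other tools.
    meta = client.V1ObjectMeta(
        name="web-config",
        labels={"app": "web", "tier": "frontend"},
        annotations={"example.com/owner": "team-storefront"},
    )
    v1.create_namespaced_config_map(
        namespace="default",
        body=client.V1ConfigMap(metadata=meta, data={"MODE": "production"}),
    )

    # Select every object that carries the label, regardless of its name.
    for cm in v1.list_namespaced_config_map("default", label_selector="app=web").items:
        print(cm.metadata.name)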

Integration of cloud-native solutions

Managed Kubernetes is a full vanilla Kubernetes in the CNCF sense. Kubernetes works most effectively when it can interoperate – efficiently connected via APIs – with many complementary services such as Istio, Linkerd, Prometheus, Traefik, Envoy, Fluentd, and Rook. As a Compute Engine user, you can simply install these complementary services yourself as needed.

Private container registry

Via your Private Container Registry, you and your team will soon be able to centrally manage Docker images, security analytics, and access rights. Integrating existing CI/CD structures will allow you to set up fully automated Docker pipelines.

Faster node deployment

The faster nodes are deployed, the more progress you and your team can make. It’s our goal to reduce node deployment times even further. The IONOS Cloud development team is working on it and the feature will be available soon.
