How we built an Internal Developer Platform on Exoscale SKS with Karpenter
A hands-on build of a full GitOps Internal Developer Platform — ArgoCD, Crossplane, Karpenter, Tailscale — running on Exoscale SKS, 100% European cloud.

Introduction
At Gravitek, we help companies design and run Internal Developer Platforms (IDPs) — the kind of tooling that gives development teams a standardized, self-service foundation for shipping software without drowning in infrastructure complexity.
When Exoscale offered us the opportunity to test their Scalable Kubernetes Service (SKS) and its new Karpenter integration, we decided to build a concrete example: a fully functional IDP running entirely on Exoscale, showcasing what SKS and Karpenter can do in a real-world platform engineering scenario.
This article walks through what we built and how we leveraged SKS features. Whether you are evaluating Exoscale for your Kubernetes workloads or curious about what a modern IDP looks like in practice, this should give you a solid picture.
What is an Internal Developer Platform?
Before diving in, a quick primer for those unfamiliar with the concept.
An Internal Developer Platform (IDP) is a set of tools and automations that form a technical foundation for development teams. Instead of having every team reinvent the wheel — setting up their own CI/CD pipeline, monitoring stack, and Kubernetes deployments — an IDP provides "golden paths": curated, secure, and maintained workflows managed by a platform team.
Think of it as a product built for your developers. They get self-service capabilities (spin up a database, deploy an app) while the platform team ensures everything stays consistent, secure, and well-governed.
The Gravitek IDP: Architecture Overview
To put SKS through its paces, we designed a GitOps-driven, self-service IDP that exercises many of the platform's capabilities: managed Kubernetes, Karpenter autoscaling, Cilium networking, and integration with the broader cloud-native ecosystem. Here is what we deployed.
Cluster setup
We provisioned an Exoscale SKS Pro cluster in the ch-gva-2 (Geneva) zone, using Cilium as the CNI (eBPF-based, replacing kube-proxy). The infrastructure is managed with Terraform and the Exoscale provider.
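For readers who want a concrete starting point, the cluster definition can be sketched with the Exoscale Terraform provider and an S3 backend pointed at SOS. This is an illustrative fragment, not our exact configuration — the bucket name, cluster name, and taint values are assumptions, and the backend flags reflect the usual S3-compatible-endpoint setup:

```hcl
terraform {
  # State stored in Exoscale SOS (S3-compatible); bucket name is hypothetical
  backend "s3" {
    bucket = "gravitek-tf-state"
    key    = "idp/terraform.tfstate"
    region = "ch-gva-2"
    endpoints = { s3 = "https://sos-ch-gva-2.exo.io" }
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    skip_s3_checksum            = true
  }
}

resource "exoscale_sks_cluster" "idp" {
  zone          = "ch-gva-2"
  name          = "gravitek-idp"
  cni           = "cilium"   # eBPF-based CNI, replacing kube-proxy
  service_level = "pro"      # Karpenter is available on Pro clusters
}

# Static infrastructure pool: one small, tainted node for platform tools
resource "exoscale_sks_nodepool" "infra" {
  cluster_id    = exoscale_sks_cluster.idp.id
  zone          = exoscale_sks_cluster.idp.zone
  name          = "infra"
  instance_type = "standard.small"
  size          = 1
  taints        = { dedicated = "platform:NoSchedule" }
}
```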
The cluster uses a two-tier node pool strategy:
- A static infrastructure pool (1x standard.small node, tainted) that runs always-on platform tools: ArgoCD, Crossplane with its providers (Exoscale, OVH, Scaleway, Kubernetes), CloudNativePG Operator, External Secrets Operator, Tailscale Operator, Trivy Operator, and Victoria Metrics with Grafana.
- A Karpenter-managed workload pool (0 to N nodes) that scales dynamically for application workloads, including Backstage and its PostgreSQL database (managed by CloudNativePG).
Terraform state is stored in Exoscale SOS (S3-compatible object storage), keeping the whole infrastructure lifecycle within the Exoscale ecosystem.
This separation ensures platform tools are always available, while workload nodes scale (and scale to zero) based on actual demand.
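To make the separation concrete: platform components need a toleration for the infrastructure pool's taint, plus a node selector to pin them there. The taint key/value, pool label, and workload below are hypothetical placeholders (in practice these fields are set through each tool's Helm values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-platform-tool    # placeholder; real tools are installed via Helm
spec:
  replicas: 1
  selector:
    matchLabels: { app: example-platform-tool }
  template:
    metadata:
      labels: { app: example-platform-tool }
    spec:
      tolerations:
        - key: dedicated          # must match the taint set on the static pool
          operator: Equal
          value: platform
          effect: NoSchedule
      nodeSelector:
        node-pool: infra          # hypothetical label identifying the static pool
      containers:
        - name: tool
          image: nginx:1.27       # placeholder image
```

Anything without this toleration lands on Karpenter-managed nodes, which is exactly what lets the workload pool scale independently.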
Platform components
Everything is deployed via ArgoCD using Helm charts, following a pure GitOps workflow. Here is what runs on the cluster:
- ArgoCD — GitOps continuous delivery. Watches our Git repositories and applies changes automatically.
- Crossplane — Cloud resource provisioning engine with providers for Exoscale, OVH, Scaleway, and Kubernetes. Lets developers request infrastructure (PostgreSQL databases, virtual machines) through Kubernetes-native APIs using a custom platform.gravitek.io API group.
- External Secrets Operator — Syncs secrets from Infisical (EU instance) into Kubernetes. Secrets never touch Git.
- Tailscale Operator — Provides VPN-only access to all platform services. More on this below.
- Victoria Metrics + Grafana — Observability stack for metrics and dashboards.
- Trivy Operator — Continuous container image vulnerability scanning.
- CloudNativePG — In-cluster PostgreSQL operator, used as the database backend for Backstage.
- Backstage — The developer portal, providing a service catalog, scaffolder templates, TechDocs, Kubernetes cluster visibility, Crossplane resource catalog, automatic template generation from XRDs, GitHub SSO, and DORA metrics tracking.
- Karpenter — Exoscale-managed node autoscaler. The star of this article.
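As an illustration of the secrets flow, an ExternalSecret resource tells the operator which entry to pull from the external store and which Kubernetes Secret to create. The store name, namespace, and key below are hypothetical; only the resource shape follows the External Secrets Operator API:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: backstage-db-credentials
  namespace: backstage
spec:
  refreshInterval: 1h              # re-sync from the external store hourly
  secretStoreRef:
    name: infisical                # hypothetical ClusterSecretStore for Infisical
    kind: ClusterSecretStore
  target:
    name: backstage-db-credentials # Kubernetes Secret created by the operator
  data:
    - secretKey: password
      remoteRef:
        key: BACKSTAGE_DB_PASSWORD # hypothetical entry name in Infisical
```

Because only this pointer lives in Git, the secret value itself never appears in the repository.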
Developer workflow
The developer experience works like this:
- A developer goes to Backstage and picks a software template (e.g., "provision a PostgreSQL database").
- Backstage generates a Crossplane claim and commits it to the claims repository via a pull request.
- An auto-merge CI workflow validates and merges the claim.
- ArgoCD picks up the change and applies it to the cluster.
- Crossplane translates the claim into actual cloud resources (e.g., an Exoscale managed database).
- The resource appears in the Backstage catalog, visible and documented.
Each resource supports T-shirt sizing (small, medium, large) with provider-specific mappings, and developers can target different cloud providers via label selectors — making multi-cloud provisioning a first-class capability of the platform.
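A claim generated by a Backstage template might look like the sketch below. The API group comes from the platform, but the kind, version, and field names are illustrative assumptions about how such a claim could be shaped:

```yaml
apiVersion: platform.gravitek.io/v1alpha1   # custom API group; version assumed
kind: PostgreSQLInstance                     # hypothetical claim kind
metadata:
  name: orders-db
  namespace: team-orders
spec:
  size: small                    # T-shirt size, mapped to provider-specific specs
  compositionSelector:
    matchLabels:
      provider: exoscale         # label selector targeting a cloud provider
```

Switching `provider: exoscale` to another supported provider is all it takes to provision the same resource elsewhere — the Composition behind the label does the translation.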
No tickets, no manual provisioning. Just Git and self-service.
Karpenter on SKS: Smart Autoscaling
Karpenter was the SKS feature we were most eager to put to the test — and the main reason this example is worth sharing.
What is Karpenter and why it matters
Karpenter is a next-generation Kubernetes autoscaler. Unlike the classic Cluster Autoscaler — which scales predefined node groups up or down — Karpenter provisions individual nodes based on the actual resource requirements of each pod. It picks the best instance type, reacts faster to load changes, and aggressively consolidates underutilized capacity.
On Exoscale SKS, Karpenter is available as a managed add-on on Pro clusters. No manual installation, no IAM configuration to set up — you enable it at cluster creation and Exoscale handles the rest.
Our configuration
We use Karpenter with a straightforward two-pool approach:
The static infrastructure pool is a classic SKS node pool (1 node, tainted for platform tools). This is not managed by Karpenter — it runs the always-on components that need to be available before Karpenter can even scale up workload nodes.
The workload pool is entirely Karpenter-managed. It uses two Kubernetes CRDs:
- ExoscaleNodeClass: Defines Exoscale-specific parameters (image template, disk size, security groups).
- NodePool: Defines scaling constraints (allowed instance types, resource limits, consolidation policy).
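A minimal sketch of the pair, assuming the upstream Karpenter v1 NodePool API; the ExoscaleNodeClass group, version, and field names are assumptions, as are the instance types and limits:

```yaml
apiVersion: karpenter.exoscale.com/v1alpha1  # API group/version assumed
kind: ExoscaleNodeClass
metadata:
  name: default
spec:
  # Exoscale-specific settings: image template, disk size, security groups
  diskSize: 50
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: workloads
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.exoscale.com        # must match the node class above
        kind: ExoscaleNodeClass
        name: default
      requirements:
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["standard.medium", "standard.large"]
  limits:
    cpu: "32"                                # hard cap on total provisioned CPU
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
```

With a consolidation policy like this and no minimum size, the pool can drain all the way down when nothing is scheduled — which is what enables the scale-to-zero behavior described below.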

The key benefit for us: scale-to-zero. When no application workloads are running (which is most of the time for a demo/testing platform), the workload pool drops to zero nodes. When someone deploys something — Backstage, a test application — Karpenter spins up the right-sized node in seconds.
What we observed
- Fast provisioning: New nodes come up quickly, which was a pleasant surprise compared to our experience with Cluster Autoscaler.
- Consolidation works well: When load decreases, Karpenter properly drains nodes (respecting Pod Disruption Budgets) before terminating them. No sudden workload disruption.
- Drift mechanism: When we upgraded the cluster's Kubernetes version, Karpenter automatically used the new version for newly provisioned nodes. No manual image updates needed.
- Cost savings: For a platform that is not running 24/7, scale-to-zero is a game changer. We only pay for the static infrastructure node when no workloads are active.
The fact that Karpenter comes pre-installed and pre-configured on SKS Pro clusters saved us significant setup time compared to doing it ourselves on a bare Kubernetes cluster.
Challenge: Zero Public Ingress with Tailscale
One design choice we made early on: no service should be exposed publicly. ArgoCD, Grafana, Backstage — these are internal tools that have no business being on the public internet.
The traditional approach and its problems
Typically, you would set up an ingress controller or Gateway API implementation, a load balancer, TLS certificates, and then add authentication layers on top. On a cloud provider, this means extra costs (load balancer billing) and a larger attack surface.
Our approach: Tailscale Operator
Instead, we deployed the Tailscale Operator on the cluster. Tailscale creates a WireGuard-based mesh VPN that connects authorized users directly to cluster services — without exposing any public endpoint.
Each service that needs to be accessible gets a Tailscale hostname:
- ArgoCD: argocd-server.<tailnet>.ts.net
- Grafana: grafana.<tailnet>.ts.net
- Backstage: backstage.<tailnet>.ts.net
These hostnames are only reachable from devices on the Tailscale network. Tailscale handles identity-based authentication (SSO), end-to-end WireGuard encryption, and automatic TLS certificates via Let's Encrypt.
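Exposing a service this way can be as simple as the sketch below, using the Tailscale Operator's load balancer class. The namespace, selector, and ports are illustrative assumptions for a Grafana install:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
  annotations:
    tailscale.com/hostname: grafana   # desired hostname on the tailnet
spec:
  type: LoadBalancer
  loadBalancerClass: tailscale        # handled by the Tailscale Operator
  selector:
    app.kubernetes.io/name: grafana
  ports:
    - port: 80
      targetPort: 3000
```

The operator then creates a proxy pod that joins the tailnet and serves the service at grafana.<tailnet>.ts.net — no public IP involved.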

Why this matters
- Zero public attack surface: Nothing is reachable from the internet.
- No load balancer costs: Tailscale replaces the traditional ingress + NLB pattern entirely, which is a real cost saver.
- Simple configuration: No complex network rules or firewall management. Access is identity-based — if you are on the Tailscale network and authorized, you can reach the service.
- No single point of failure: Tailscale is a mesh network, not a centralized VPN gateway.
This approach fits into a broader security model that also includes secret management through Infisical (EU instance, Machine Identity authentication with read-only scope), continuous image scanning via Trivy, node isolation through dedicated tainted pools, and kubeconfig certificate rotation every 30 days.
On Exoscale specifically, this worked without any particular issue. The Tailscale Operator runs on the tainted infrastructure node and integrates cleanly with the SKS networking model.
A European Cloud, A Real Alternative
Exoscale is a European cloud provider, hosted in datacenters across Europe, with native GDPR compliance. In a context where digital sovereignty is becoming a strategic concern for many organizations, this is worth highlighting.
Our experience shows that SKS is a mature offering, capable of running serious workloads. Karpenter brings the level of automation you would expect from a modern managed Kubernetes service, and it all runs on 100% European infrastructure. This is no longer a trade-off between sovereignty and technical capability — it is a credible alternative to American hyperscalers.
Conclusion
Building this IDP on Exoscale SKS confirmed what we hoped: you can run a serious, production-grade platform engineering stack on a European cloud without compromise. SKS is solid, Karpenter delivers real autoscaling benefits out of the box, and the overall experience is smooth enough that we would confidently recommend it for teams looking at sovereign Kubernetes options.
We’ve demonstrated just one way to use SKS and Karpenter — yours might look very different depending on your workloads. The point is that the building blocks are there, and they work well together.
Budget-wise, this configuration runs the complete Internal Developer Platform for under €100 per month in initial operating costs.
If digital sovereignty matters to your organization and you are looking to build or modernize your internal developer platform, Gravitek can help you get there. Let's talk.