As a dedicated advocate of IPv6, I recently embarked on the task of converting my personal Kubernetes cluster to an IPv6-only environment.
Why?
The growing scarcity of IPv4 addresses is a well-known issue. Cloud providers like AWS now charge extra for public IPv4 addresses, and many ISPs are resorting to Carrier-Grade NAT (CGNAT) for their end users. The current wait time for new LIRs to acquire a single IPv4 block from RIPE NCC is approaching 500 days.
IPv6, on the other hand, provides a more sustainable solution. With its vast address space, each pod in a Kubernetes cluster can be assigned a globally unique IPv6 address, eliminating the need for NAT altogether. This simplifies network management and ensures that clusters can scale to accommodate future growth without running out of addresses.
IPv6 Support in Kubernetes
While Kubernetes has supported IPv6 since its early days, dual-stack support for both IPv4 and IPv6 was introduced in version 1.20 (December 2020). This raises the question of whether IPv6-only support across the Kubernetes ecosystem is mature enough for production environments.
In my experience managing dual-stack clusters, I have encountered instances where IPv6 support for certain features is limited and/or less tested, potentially leading to unexpected issues.
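For reference, the dual-stack API is also what makes a single-stack IPv6 Service explicit; a minimal example (name, selector and port are placeholders):

# Service pinned to a single IPv6 address
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv6
  selector:
    app: demo
  ports:
    - port: 80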
Exploring IPv6-Only Deployments
Intrigued by the prospect of a fully IPv6-only cluster, I set out to deploy one in my lab environment. I chose Talos Linux as the operating system and Cilium as the Container Network Interface (CNI).
However, I quickly encountered a challenge. Talos attempts to pull its container images from GitHub Container Registry (ghcr.io), which only supports IPv4. This presented two options: rehost the images on an IPv6-enabled registry or implement IPv6-to-IPv4 translation (NAT64).
Despite IPv6’s promise of eliminating the need for costly NAT Gateways, interacting with the legacy internet demands some form of translation.
I went with the latter: a DNS64/NAT64 setup to enable connectivity to the IPv4 internet. While it adds complexity, it ensures that the cluster can communicate with external services that only support IPv4.
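As a rough sketch, the DNS64 half can be handled by a resolver such as Unbound, with a separate NAT64 translator (Jool or Tayga, for example) doing the actual packet translation on the gateway; the listen address and allowed prefix below are placeholders:

# unbound.conf - synthesize AAAA records for IPv4-only destinations
server:
  module-config: "dns64 validator iterator"
  dns64-prefix: 64:ff9b::/96            # well-known NAT64 prefix
  interface: 2001:db8:14::53            # resolver address handed to the nodes
  access-control: 2001:db8:14::/48 allow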
Deployment of Cilium and Kube-Router
The deployment of Cilium presented its own set of challenges. Cilium’s IPv6-only support is still under development, and certain features, such as tunneling between nodes, are not fully implemented.
To overcome these limitations, I opted to run Cilium in “native-routing” mode and use kube-router to establish BGP routing between the cluster nodes and the external gateway. This setup works, but it requires additional configuration (shown below) and adds complexity overall.
The issues I identified are too extensive to cover in a single post, but the majority, if not all, are documented as upstream issues.
# Cilium
ipam:
  mode: kubernetes            # allocate pod addresses from each node's podCIDR
routingMode: native           # no tunnelling; routes are distributed via BGP instead
ipv4:
  enabled: false
ipv6:
  enabled: true
enableIPv4Masquerade: false
enableIPv6Masquerade: false   # no SNAT; pods keep their globally routable addresses
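Assuming the values above are saved as cilium-values.yaml, the chart is installed the usual way (additional Talos-specific settings may still be needed):

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --namespace kube-system -f cilium-values.yaml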
# kube-router - We only need bgp and routing
- "--run-router=true"
- "--run-firewall=false"
- "--run-service-proxy=false"
- "--bgp-graceful-restart=true"
- "--enable-cni=false"
- "--enable-ibgp=true"
- "--enable-overlay=false"
- "--peer-router-ips=2001:0DB8:14::1"
- "--peer-router-asns=64513"
- "--cluster-asn=64513"
- "--advertise-cluster-ip=true"
- "--advertise-external-ip=true"
- "--advertise-loadbalancer-ip=true"
- "--enable-ipv4=false"
- "--enable-ipv6=true"
To assign Router IDs to nodes dynamically, I am currently developing a patch for kube-router that lets it read the Router ID from a node annotation.
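As an illustration of the intended workflow, the annotation key here is hypothetical and may change before the patch lands:

# hypothetical annotation consumed by the patched kube-router
kubectl annotate node talos-worker-1 kube-router.io/router-id=192.0.2.11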
Addressing IPv4-Only Clients
A significant portion of internet users still have IPv4-only connectivity. To cater to these users, I needed a way for IPv4 traffic to reach services hosted on the cluster.
I initially considered NAT46, which translates incoming IPv4 traffic to IPv6 at the network edge. However, this approach discards the client's source IP, which can be valuable for troubleshooting or security purposes.
The chosen approach uses the PROXY protocol, which conveys the client's original source IP at the start of the TCP connection without requiring HTTPS traffic to be decrypted. This led to deploying Nginx in ssl-passthrough mode in front of Traefik, which serves as the ingress inside the cluster.
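A minimal sketch of that edge Nginx, assuming a dual-stack host in front of the cluster and a placeholder address for the Traefik entrypoint:

# nginx.conf (stream module): pass TLS through untouched, prepend a PROXY protocol header
stream {
    server {
        listen 443;                            # IPv4 clients land here
        listen [::]:443;
        proxy_protocol on;                     # carry the original client IP to the backend
        proxy_pass [2001:db8:14::443]:443;     # Traefik entrypoint (example address)
    }
}

On the Traefik side, the matching entrypoint's proxyProtocol.trustedIPs must include the Nginx host so the forwarded client address is actually trusted.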
Conclusion
The venture into an IPv6-only Kubernetes cluster presented both advantages and challenges. The insights gained from navigating these less-traveled paths will likely prove valuable in the future. While services to accommodate outgoing and incoming IPv4 traffic were still necessary, the cluster itself is now single-stack.
What’s Next?
As the experiment concludes, the question of whether to apply these learnings in a production environment remains open.
Work-in-progress code is available here.