Kubernetes - Routing external traffic to local Kubernetes cluster
This post is part of our ongoing DevOps series focused on Kubernetes and GitOps practices. So far, we’ve explored various aspects of DevOps with FluxCD, and our upcoming posts will cover application deployment strategies in Kubernetes.
A significant challenge when working with a local Kubernetes environment (like our K3s setup) is the lack of public network access, which is automatically handled in managed Kubernetes offerings from cloud providers. For locally hosted clusters, establishing secure public connectivity is essential for many real-world scenarios.
This post addresses this specific challenge: how to expose services from your local Kubernetes cluster to the internet. The solution we’ll implement involves a public-facing cloud server with HAProxy for Layer 4 load balancing, connected to our local cluster nodes through an encrypted Wireguard tunnel. This networking foundation will enable several advanced configurations in future posts, including:
- Exposing applications to the public internet through Ingress
- Setting up automatic DNS management with external-dns
- Implementing automated TLS certificate provisioning with cert-manager
- Configuring authentication and access control for your exposed services
Note: This setup is primarily intended for homelab or self-hosted Kubernetes environments. If you’re using a managed Kubernetes service like GKE, EKS, or AKS, this specific configuration is unnecessary as these platforms provide their own load balancing and ingress solutions.
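Before diving in, here is a rough sketch of the traffic path we are building; the 10.0.0.x addresses are the Wireguard tunnel addresses used throughout this post:

Internet (clients)
      |
      v
Public cloud server: HAProxy, TCP :80 / :443   (Wireguard 10.0.0.1)
      |                                   |
      |  Wireguard tunnel                 |  Wireguard tunnel
      v                                   v
Node 1 (10.0.0.2)                   Node 2 (10.0.0.3)
      \_______ Ingress controller in the local K3s cluster _______/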
1. Setup Wireguard on Server
First, install the Wireguard package:
apt install -y wireguard
Generate a cryptographic key pair:
wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey
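Since the private key grants full access to the tunnel, it is a good idea to restrict its permissions (adjust the path if you store your keys elsewhere):

chmod 600 /etc/wireguard/privatekey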
Configure the wg0 interface by creating /etc/wireguard/wg0.conf:

[Interface]
PrivateKey = ***
Address = 10.0.0.1/24
ListenPort = 51820
SaveConfig = true

# Node 1
[Peer]
PublicKey = ***
AllowedIPs = 10.0.0.2/32
PersistentKeepalive = 25

# Node 2
[Peer]
PublicKey = ***
AllowedIPs = 10.0.0.3/32
PersistentKeepalive = 25
Activate and enable the Wireguard service:
systemctl enable --now wg-quick@wg0.service
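If the server runs a firewall, the Wireguard listen port must be reachable from the nodes before they can connect. As an example, if you happen to use ufw (adapt this to whatever firewall you actually run):

ufw allow 51820/udp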
Verify that the interface is operational:
ip link list

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether fa:16:3e:d7:81:4e brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/none
As we can see, the wg0 interface is successfully established.
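Beyond the interface being up, you can inspect peer status with the wg tool; once the nodes from the next section are configured, each peer should show a recent handshake and non-zero transfer counters:

wg show wg0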
2. Setup Wireguard on Kubernetes Nodes
For our Kubernetes nodes, we’re using MicroOS with K3s. Install the Wireguard tools:
transactional-update pkg in wireguard-tools
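Keep in mind that transactional-update installs the package into a new snapshot, so the Wireguard tools only become available after the node has been rebooted:

reboot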
Generate key pairs and configure the wg0 interface by creating /etc/wireguard/wg0.conf on each node:
On Node 1:
[Interface]
PrivateKey = ***
Address = 10.0.0.2/24

[Peer]
PublicKey = ***
Endpoint = ***:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
On Node 2:
[Interface]
PrivateKey = ***
Address = 10.0.0.3/24

[Peer]
PublicKey = ***
Endpoint = ***:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
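A note on the node configuration: with wg-quick, AllowedIPs = 0.0.0.0/0 also installs a default route, so all outbound traffic from the node is sent through the tunnel. If you only need the public server to reach the nodes over the VPN (which is all this setup requires), you can restrict the peer to the server's tunnel address instead:

AllowedIPs = 10.0.0.1/32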
Enable and start the Wireguard interface:
systemctl enable --now wg-quick@wg0.service
Verify connectivity from each node to the server:
ping 10.0.0.1

PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=138 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=67.4 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=68.0 ms
Verify connectivity from the server to each node:
ping 10.0.0.2

PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=71.6 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=68.7 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=68.1 ms

ping 10.0.0.3

PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=81.8 ms
64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=78.9 ms
64 bytes from 10.0.0.3: icmp_seq=3 ttl=64 time=77.9 ms
3. Configure HAProxy as a Layer 4 Load Balancer
Install HAProxy on the server:
apt install -y haproxy
Configure HAProxy by editing /etc/haproxy/haproxy.cfg:

# Forward HTTP traffic (port 80) to both cluster nodes over the tunnel
frontend http_frontend
    bind *:80
    mode tcp
    option tcplog
    default_backend http_backend

backend http_backend
    mode tcp
    balance roundrobin
    option tcp-check
    server node1 10.0.0.2:80 check
    server node2 10.0.0.3:80 check

# Forward HTTPS traffic (port 443) to both cluster nodes over the tunnel
frontend https_frontend
    bind *:443
    mode tcp
    option tcplog
    default_backend https_backend

backend https_backend
    mode tcp
    balance roundrobin
    option tcp-check
    server node1 10.0.0.2:443 check
    server node2 10.0.0.3:443 check
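Before restarting, it is worth validating the file; HAProxy can check the configuration without applying anything:

haproxy -c -f /etc/haproxy/haproxy.cfg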
Restart HAProxy to apply the configuration:
systemctl restart haproxy
Verify that traffic is properly forwarded by sending a request to the server’s public IP address from our local machine:
curl http://$PUBLIC_IP

404 page not found
The “404 page not found” response is expected and actually confirms success: the request traveled through HAProxy, across the Wireguard tunnel, and reached the Kubernetes ingress controller, which simply has no routes configured yet.
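The HTTPS frontend can be checked the same way; the -k flag skips certificate verification, since at this point the ingress controller is still serving a default self-signed certificate (cert-manager will take care of real certificates in a later post):

curl -k https://$PUBLIC_IP

One limitation to keep in mind with Layer 4 forwarding: the ingress controller sees every connection as coming from the server's tunnel address (10.0.0.1) rather than the real client IP. If you need client IPs in your logs, one option is the PROXY protocol: append send-proxy-v2 to the server lines above and enable PROXY protocol support in your ingress controller (both sides must agree, otherwise requests will fail). For example:

server node1 10.0.0.2:443 check send-proxy-v2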
Conclusion
We have successfully set up a secure tunnel between our public-facing server and our local Kubernetes cluster nodes using Wireguard. The HAProxy configuration routes all HTTP and HTTPS traffic to our cluster nodes in a load-balanced manner. This setup allows us to expose services from our local Kubernetes cluster to the internet with minimal configuration.
This foundation is critical for the GitOps-driven application deployments we’ll be implementing in upcoming posts in this series. With this networking infrastructure in place, we can now focus on automating application deployments with FluxCD while ensuring they’re securely accessible from the internet. Stay tuned for our next posts on push-based deployments with FluxCD, and on configuring external-dns and cert-manager to work with this setup.