Add Local DNS Entries to K3S

There's an age-old practice of adding local DNS entries to your own computer by changing the hosts file (/etc/hosts or C:\Windows\system32\drivers\etc\hosts). But how would you go about it if you need local entries in Kubernetes, specifically K3S?
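For reference, a hosts file is just lines of an IP address followed by one or more hostnames; the entries below are made-up examples:

```
# /etc/hosts - each line maps one IP to one or more names
127.0.0.1    localhost
10.0.0.10    app.example.com example.com
```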

In my specific case, the firewall configuration (outside Kubernetes) of a cluster I'm working on doesn't allow the nodes to reach themselves (or each other) on their external IPs. This becomes a problem when cert-manager requests a certificate from Let's Encrypt, since it starts the process by verifying that the ingress is reachable on the requested domain name (which, of course, points to the external IP). So we need all of the domain names used by ingresses in the cluster to resolve to internal IPs from inside the cluster.

One way to do it is to just add them to /etc/hosts on each node in the cluster. But that's not going to be very sustainable: every time you add a new node, you have to deploy the custom hosts file to it. Worse, every time you add a local host entry, you have to update the hosts file on every node. We need a better solution.
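To make the pain concrete, keeping per-node hosts files in sync would look something like this (a sketch only; the node names and file paths are made up):

```shell
# Push the same custom hosts file to every node - and remember
# to re-run this every time a node or a host entry is added.
for node in node1 node2 node3; do
    scp custom-hosts "root@${node}:/etc/hosts"
done
```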

The better solution is to deploy an additional ConfigMap for CoreDNS. It looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  default.server: |
    # Placeholder domain and internal IP - substitute your own.
    example.com {
        hosts {
            10.0.0.10 example.com
            fallthrough
        }
    }
Save it to a file called coredns-custom.yaml (or whatever you'd like) and run:

$ kubectl apply -f coredns-custom.yaml
$ kubectl -n kube-system rollout restart deployment coredns

This will survive restarts of both the nodes and the K3S service, and it applies cluster-wide, regardless of nodes being added or removed.
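To verify that a custom entry resolves from inside the cluster, you can query it from a throwaway pod (example.com here is a placeholder; use the domain you actually configured):

```shell
# Resolve the custom name from inside the cluster using a
# short-lived busybox pod that is deleted afterwards.
kubectl run dnstest --rm -it --restart=Never \
    --image=busybox:1.36 -- nslookup example.com
```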

There is a similar solution where you use a key named something.override instead of something.server, but that does not work in K3S (it reportedly does in Azure Kubernetes Service).
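For completeness, the override variant would be a key like the following in the same coredns-custom ConfigMap; since an override is spliced into CoreDNS's default server block, it carries no zone of its own (again, this form did not work for me on K3S, and the domain and IP are placeholders):

```
example.override: |
    hosts {
        10.0.0.10 example.com
        fallthrough
    }
```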
