Making K3S Stop Hoarding Disk Space

K3S is my favorite small self-contained Kubernetes distribution. To stay as infrastructure agnostic as possible I like using a Kubernetes-deployed dynamic storage provider, like Longhorn. The problem: Longhorn won't schedule new volume claims unless there's at least 25% free disk space, while K3S doesn't start garbage collecting downloaded container images until the disk is 85% full. So once disk usage creeps past 75% you're stuck in a deadlock: Longhorn refuses new volume claims, yet K3S won't free any space until enough new image pulls have pushed usage up to 85% and garbage collection finally kicks in.

How to change K3S garbage collection limits

Normally garbage collection in Kubernetes is handled by the Kubelet service. In K3S, the Kubelet is integrated into the K3S process, and settings for it go in /etc/rancher/k3s/config.yaml. This file probably doesn't exist yet, so create it (note that it's not the same file as k3s.yaml).

Add these settings (and then restart k3s):

kubelet-arg:
  - "image-gc-low-threshold=65"
  - "image-gc-high-threshold=75"

(the defaults are 80 and 85)

This makes K3S start garbage collecting images as soon as the disk reaches 75% full and keep going until usage drops back to 65%, which keeps disk usage under Longhorn's 75% limit and avoids the deadlock.
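As a concrete sketch of the whole change, assuming a systemd-managed K3S server installed with the official install script (adjust the service name if you run an agent or use a different service manager):

# Create the config directory and file if they don't exist yet
sudo mkdir -p /etc/rancher/k3s

# Write the kubelet arguments (this overwrites the file; merge by hand
# if you already have other settings in config.yaml)
sudo tee /etc/rancher/k3s/config.yaml > /dev/null <<'EOF'
kubelet-arg:
  - "image-gc-low-threshold=65"
  - "image-gc-high-threshold=75"
EOF

# Restart K3S so the embedded Kubelet picks up the new arguments
sudo systemctl restart k3s

After the restart, watching disk usage over time is the simplest way to confirm the new thresholds are doing their job.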

Manual garbage collection

If you need to free up space right away, for instance because you can't restart K3S at the moment, you can do essentially the same thing as pruning images in Docker.

To list all images that have been pulled:

sudo k3s crictl images

To delete all images that aren't currently used by any running container:

sudo k3s crictl rmi --prune

This freed up about 32 GB of disk right away on my K3S master.
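If you want to see how much a prune reclaims on your own node, a quick before-and-after check works; this assumes K3S keeps its images under the default /var/lib/rancher data directory (check your own data-dir path if you've moved it):

df -h /var/lib/rancher            # note usage before
sudo k3s crictl rmi --prune       # remove images not used by any container
df -h /var/lib/rancher            # compare usage after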
