Running Home-Assistant in Kubernetes
! Note: at the time of writing this is working with the following versions.
- Home-Assistant: 2024.9.1
- Kubernetes: v1.30
Background
At home I run all of my services inside a local Kubernetes cluster. Why? Because I work with Kubernetes professionally day in and day out, and I'd rather work with manifests in a GitOps workflow than with an orchestration tool such as Salt or Ansible.
Different integrations in Home-Assistant have different needs. For example, multicast traffic is used for discovery by several integrations. Since the pod inside Kubernetes sits in an overlay network, it never receives these messages. HomeKit devices use this same multicast discovery, but they have a secondary requirement that both devices be on the same /24 network (Samsung televisions have a similar networking requirement).
In my home network I also have 3 VLANs.
| VLAN | IP Block | Description |
|---|---|---|
| 10 | 10.0.1.0/24 | Default VLAN for personal devices. |
| 20 | 10.0.2.0/24 | VLAN for servers like my Kubernetes hosts. |
| 30 | 10.0.3.0/24 | IoT VLAN for devices like televisions. |
This adds a second layer of complexity because multicast traffic doesn't traverse VLANs by default. So in order to support discovery and be able to talk to all the devices we need to:
- Get Home-Assistant running in Kubernetes.
- Somehow get multicast traffic into the Home-Assistant pod.
- Allow the Home-Assistant pod access to all the VLANs.
- Get Home-Assistant listening on the VLAN interfaces.
Creating a Home-Assistant Deployment
I'm using Kustomize to deploy my Home-Assistant instance. The following is a basic example and what we'll be using as a base going forward. The kustomization's commonLabels adds the app.kubernetes.io/name label to every resource and wires it into the Deployment and Service selectors, so the manifests below don't set labels or selectors explicitly. When you're done you'll have a Home-Assistant pod running inside the home-automation namespace.
> kubectl get deployments -n home-automation
NAME READY UP-TO-DATE AVAILABLE AGE
home-assistant 1/1 1 1 6h13m
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: home-automation
resources:
- deployment.yaml
- pvc.yaml
- service.yaml
- ingress.yaml
images:
- name: homeassistant/home-assistant
  newTag: 2024.9.1
commonLabels:
  app.kubernetes.io/name: home-assistant
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-assistant
spec:
  template:
    spec:
      containers:
      - image: homeassistant/home-assistant:latest
        imagePullPolicy: IfNotPresent
        name: home-assistant
        ports:
        - containerPort: 8123
          name: http
          protocol: TCP
        volumeMounts:
        - mountPath: /config
          name: config
      dnsPolicy: ClusterFirst
      volumes:
      - name: config
        persistentVolumeClaim:
          claimName: home-assistant-config
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: home-assistant-config
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10G
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: home-assistant
spec:
  ports:
  - name: http
    port: 8123
    protocol: TCP
    targetPort: http
  type: ClusterIP
ingress.yaml
I'm using my own TLD, with DNS configured to point at my ingress, and cert-manager to generate certificates for the service. Not all of this is strictly needed, but it's generally a good idea even for local installs.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
  name: home-assistant
spec:
  ingressClassName: nginx
  rules:
  - host: home-assistant.example.tld
    http:
      paths:
      - backend:
          service:
            name: home-assistant
            port:
              number: 8123
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - home-assistant.example.tld
    secretName: home-assistant-example-tld
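The letsencrypt cluster issuer referenced by the annotation above isn't part of this kustomization. As a rough sketch only, assuming cert-manager is installed, your hostname is publicly resolvable so the HTTP-01 challenge can reach the nginx ingress, and with the email address and secret name as placeholders, it could look something like this:
clusterissuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # Production ACME endpoint; use the Let's Encrypt staging URL while testing.
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.tld
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
If your hostname only resolves on your local network, a DNS-01 solver (or a private CA issuer) is the usual alternative.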
Networking
Now that we have a deployment of Home-Assistant up and running we need to tackle the networking requirements. Since we're running in an overlay network we need to somehow break that boundary and allow the traffic directly to the pod. The easiest way to do that is to use the Multus CNI to inject a network interface that talks directly to the VLAN network. Doing this will allow all the traffic directly to the pod, including multicast traffic.
Node Configuration
In order to get your pod talking to the network you'll have to configure your Kubernetes nodes to give them access to the VLAN. On Debian this means setting up a bridge interface on the node itself. After installing bridge-utils you can add a configuration similar to the following for each VLAN.
/etc/network/interfaces.d/vlan30
# <parent interface>.<vlan id>
auto enp3s0.30
iface enp3s0.30 inet manual
  vlan-raw-device enp3s0

auto vlan30
iface vlan30 inet manual
  bridge_ports enp3s0.30
Pod Network Configuration
Refer to the Multus docs for installation instructions. In my case I'm running RKE2, and installing it is as simple as modifying /etc/rancher/rke2/config.yaml on each node to add it to the cni section.
cni:
- multus
- canal
If you're running a multi-node cluster you should also run Whereabouts to keep track of IP allocations across nodes.
To be able to inject IPs into the pods you'll need to create a NetworkAttachmentDefinition that configures the network settings.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: iot
  namespace: kube-system
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "vlan30",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "10.0.3.0/24",
        "range_start": "10.0.3.50",
        "range_end": "10.0.3.75",
        "gateway": "10.0.3.2",
        "routes": [{"dst": "10.0.3.0/24", "gw": "10.0.3.2"}]
      }
    }'
The above won't do anything on its own, however; we need to modify our deployment.yaml to reference this new config. To do that, all you need is a pod annotation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-assistant
spec:
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: kube-system/iot
<snip>
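In the exec output below you'll see the pod also has an interface on my personal VLAN (net2 at 10.0.1.50). That comes from a second NetworkAttachmentDefinition built exactly like the iot one, referenced as a comma-separated list in the same annotation. The following is only a sketch, assuming a vlan10 bridge exists on the nodes; the personal name, address range, and gateway are illustrative values rather than my real config:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: personal
  namespace: kube-system
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "vlan10",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "10.0.1.0/24",
        "range_start": "10.0.1.50",
        "range_end": "10.0.1.75",
        "gateway": "10.0.1.1"
      }
    }'
With that in place the annotation becomes a list:
k8s.v1.cni.cncf.io/networks: kube-system/iot,kube-system/personal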
Your pod should restart, and if you exec into it (kubectl exec -it -n home-automation home-assistant-<id> -- /bin/bash) you should get something along these lines. (I have other networks attached as well, but we're focusing on vlan30; all of the steps above can be repeated to attach more than one network if needed.)
kubectl exec -it -n home-automation home-assistant-6b5bd9d796-hpb7s -- /bin/bash
home-assistant-6b5bd9d796-hpb7s:/config# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if39: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP qlen 1000
link/ether 7e:ef:54:cd:bd:1b brd ff:ff:ff:ff:ff:ff
inet 10.42.2.30/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::7cef:54ff:fecd:bd1b/64 scope link
valid_lft forever preferred_lft forever
3: net1@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether da:07:36:c4:ff:a8 brd ff:ff:ff:ff:ff:ff
inet 10.0.3.50/24 brd 10.0.3.255 scope global net1
valid_lft forever preferred_lft forever
inet6 fe80::d807:36ff:fec4:ffa8/64 scope link
valid_lft forever preferred_lft forever
4: net2@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 5e:54:3b:b1:96:a2 brd ff:ff:ff:ff:ff:ff
inet 10.0.1.50/24 brd 10.0.1.255 scope global net2
valid_lft forever preferred_lft forever
inet6 fe80::5c54:3bff:feb1:96a2/64 scope link
valid_lft forever preferred_lft forever
You can see that my pod now has a network interface on my IoT VLAN (10.0.3.50). 🚀
Home-Assistant Configuration
Now that we have the IPs bound to the pod you would think it would all magically work. I thought so too, but I ended up spending a fair amount of time figuring out why discovery in Home-Assistant still wasn't working at this point. Home-Assistant is listening on the network interface, but discovery never finds any devices. That's because, by default, Home-Assistant only listens for multicast traffic on the default interface, and you need to enable the newly added ones.
The settings for this are hidden away.
- Log in to Home-Assistant.
- View your profile (the bottom-left button with your name on it).
- Scroll down to Advanced mode and enable it.
- Go into Settings -> System -> Network.
- Under Network adapter, uncheck Auto Configure.
- Enable the new interfaces and click SAVE.
Eventually (you may need to restart the service) you should see a notification saying new devices have been discovered.
Gotchas
Now that you're assigning an IP to the pod, the normal Kubernetes rollout may not work: the default RollingUpdate strategy brings up the replacement pod while the old one is still running, so the IP can't be assigned to the new pod because it's already in use. To fix this you need to change the rollout strategy in your deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-assistant
spec:
  strategy:
    type: Recreate
Conclusion
This isn't a supported install by any means, and you may run into issues with some integrations, but I've been using this method for well over a year without any problems. You may also have to take other things into account, like how you're going to attach any USB dongles and how you're going to apply node affinities for them. If you have a background in Kubernetes, however, all of that is completely manageable.
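If you do end up attaching a USB dongle (a Zigbee or Z-Wave stick, for example), the usual approach is to pin the pod to the node the stick is plugged into and mount the device into the container. The following is only a sketch of the fields you would merge into the earlier deployment.yaml, under assumptions: the usb: zigbee node label, the /dev/ttyUSB0 device path, and the privileged security context are placeholders to adapt (a device plugin is a cleaner alternative if you want to avoid privileged containers).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-assistant
spec:
  template:
    spec:
      # Pin the pod to the node that physically has the USB stick attached.
      nodeSelector:
        usb: zigbee
      containers:
      - name: home-assistant
        # Mounting a host device generally requires elevated privileges.
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /dev/ttyUSB0
          name: zigbee
      volumes:
      - name: zigbee
        hostPath:
          path: /dev/ttyUSB0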