Running Home Assistant on Kubernetes. It is about time to put your IoT devices under control. Everything runs without the cloud, on my on-premises Kubernetes. Clean and secure.
I created a separate VLAN just for IoT devices on my OpenWrt router. Some things to note about this VLAN:
A separate AP exists for devices communicating over WiFi.
This VLAN has access neither to other networks nor to the Internet, so no IoT device will be able to phone home. Most probably you cannot use the official apps either.
The host running Kubernetes has a network connection in both VLANs (optional).
If you would like to create VLANs but don’t know how to start I highly recommend watching Mark’s great video about VLANs in OpenWrt 21. It will give you a head start.
Use K3s lightweight Kubernetes distribution for IoT
Although there are many Kubernetes distributions to choose from, I have picked K3s.
It is a lightweight Kubernetes distribution and it is proven to work on a bunch of inexpensive Raspberry Pi 4s running in a basement, just like Jeff's Pi Dramble. Make sure to check him out, it is unbelievable. 🙂
The Deployment problem of Home Assistant
There are a couple of supported ways of installing Home Assistant. Unfortunately, Kubernetes is not among them. I do not blame them. It is probably simply not worth the effort to support it: people using Home Assistant are already a niche market, and people running Kubernetes at home are an even smaller one.
There are 4 options to install it.
Home Assistant Operating System – their own operating system based on Docker (no go)
Home Assistant Container – A container image containing only the core of Home Assistant (we’ll use this)
Home Assistant Supervised – “This way of running Home Assistant will require the most of you. It also has strict requirements you need to follow.” (still requires Docker’s socket – no go)
Home Assistant Core – Same as Home Assistant Container but without the container (no go)
The biggest drawback of using their container image is that there will be no "Add-ons store" feature, and you will need to find the necessary how-tos in forums, GitHub issues, etc.
So I will rely on their container image but what about deployment descriptions?
I cannot really use their docker-compose.yml in Kubernetes for other purposes than as a base for creating my own manifests.
There were Helm charts available for some time, but at the time of writing this article none of them is maintained anymore. Still, they are a better source for creating something new than the docker-compose YAMLs. I decided to roll my own manifests and keep things as simple as possible.
Installing Home Assistant with Kustomize
So I have the official container image but I will need services other than the plain Core of Home Assistant. I need the following functionalities.
MQTT support (Mosquitto) which is a frequently required integration interface for lots of IoT devices. You can download the Kubernetes manifest from abalage/mosquitto-mqtt-k8s.
Nginx Ingress Controller to expose HTTP(S) and MQTT's TCP port 1883 (the latter is not an HTTP protocol; see the sketch after this list)
MetalLB for acquiring an externally addressable IP address for Home Assistant’s service (LoadBalancer)
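MQTT is plain TCP, so it cannot be routed through a regular Ingress resource. ingress-nginx can still forward it through its TCP services ConfigMap. Below is only a minimal sketch of that part; the namespace and Service names (ingress-nginx, home-assistant, mosquitto) are assumptions and may differ from what the repository uses.

```
# Hypothetical sketch: forward TCP port 1883 through ingress-nginx.
# Namespace and Service names are assumptions; adjust to your deployment.
kubectl create configmap tcp-services \
  --namespace ingress-nginx \
  --from-literal=1883="home-assistant/mosquitto:1883"
# The controller must be started with
#   --tcp-services-configmap=ingress-nginx/tcp-services
# and its Service must expose port 1883 as well.
```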
As I mentioned before, the Helm charts are outdated and I needed some customization anyway, hence I use Kustomize to get all the dependencies properly configured and patched to fit my network environment.
You do not need to individually download the manifests. Just check out the GitHub project abalage/Home-Assistant-on-Kubernetes which contains everything you need to start.
Make sure you edit `kustomization.yaml` to fit your environment. Check out the README for details.
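Bringing everything up is then a single Kustomize apply. A minimal sketch, assuming kubectl 1.14+ with built-in Kustomize support and that kustomization.yaml lives in the repository root (adjust the path otherwise):

```
git clone https://github.com/abalage/Home-Assistant-on-Kubernetes.git
cd Home-Assistant-on-Kubernetes
# edit kustomization.yaml first (IP pool, hostnames, etc.)
kubectl apply -k .
```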
Now you can run your own Home Assistant on Kubernetes.
Deploy Elasticsearch stack with podman and Ansible. Halfway on the road towards complete automation, but without the necessity of a complex orchestration tool. Somewhere between pets and cattle.
There is an existing Ansible collection, containers.podman, to handle podman pods and containers. Although Elastic, the company, already maintains an Ansible playbook for Elasticsearch, it uses regular Linux packages and not container images. Meet abalage.elasticstack_podman, a collection of Ansible roles to deploy and handle an Elasticsearch cluster and its components, like Kibana, Filebeat, Metricbeat and Logstash.
Any operating system which supports a relatively recent version of podman (>=3.0) is required. Beware that CentOS 7 is not among them. The playbooks were tested on AlmaLinux 8.4 and OpenSUSE Leap 15.3. However on OpenSUSE you need to use a third party repository (Virtualization_containers).
The collection does not contain a reverse proxy for Kibana. You can use either Traefik or NGINX. The Kibana container already provides labels for Traefik.
Features
I implemented the following features.
It deploys an Elasticsearch cluster. It works with single-node deployments, but you can build a cluster of multiple nodes as well. You can even run multiple nodes on the same host OS.
Use Kibana for visualization.
Metricbeat automatically collects and stores all components' metrics in the cluster. Use Kibana's Stack Monitoring app to access the metrics.
Filebeat sends the components' logs to Elasticsearch. Use Kibana's Logs app to access the logs.
Optionally you can set up Logstash containers too. Although there are not many pipeline templates available.
Automatically populates built-in and custom users, passwords and roles. It does not support AD integration yet.
Pods and containers are automatically started upon reboot by using systemd units.
Supports host firewalld. Disabled by default.
Works best with host networking. Support for bridge networking is best effort and has scalability limitations. It does not support rootless networking at the moment.
Usage of the collection
I expect you already have an Ansible control node and several managed hosts. The collection was developed and tested with Ansible 2.9.
Create your deployment playbook
A playbook defines which plays, roles and tasks of the collection are executed on which hosts. There is an existing playbook called elk-podman-deployment you can use, and there is an example playbook in the repository too.
Create your deployment inventory
The deployment inventory describes what your cluster looks like. You can use the variables from the roles' defaults to create an inventory from scratch. However, I provide an example inventory that you can customize.
Do not forget to encrypt sensitive data with ansible-vault.
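For example, a single value can be encrypted in place with ansible-vault; the variable name and file path below are just illustrations:

```
# encrypt one value and paste the output into your inventory
ansible-vault encrypt_string 'S3cr3tPassw0rd' --name 'elastic_password'
# or encrypt a whole vars file
ansible-vault encrypt group_vars/all/vault.yml
```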
For security reasons I highly recommend creating proper X.509 certificates for TLS. Make sure to follow the Securing Elasticsearch cluster guide to create such certificates.
Run the playbook
Once the inventory is complete, you can run the playbook to deploy the Elasticsearch stack with podman and Ansible.
It is a good idea to run it in check mode on the first run to see whether anything is missing from the inventory.
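The commands could look something like this; the inventory and playbook file names are assumptions, use whatever you created above:

```
# dry run first: check mode shows what would change and what is still missing
ansible-playbook -i inventory.yml elk-podman-deployment.yml --ask-vault-pass --check
# then the real run
ansible-playbook -i inventory.yml elk-podman-deployment.yml --ask-vault-pass
```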
Reverse proxy
The collection itself does not provide any reverse proxy.
You can use any kind of reverse proxy to provide access to Kibana or any other component. I suggest using Traefik for auto-discovery.
Conclusion
Developing all these roles and tasks was fun, and I learned a lot about Ansible. I can recommend this collection to anyone who needs such a setup without the requirement of running a complex orchestration platform. I am aware of production systems deployed by this playbook.
However, I think this approach is not feasible in the long run. The architecture can easily grow uncontrolled, unless someone constantly maintains the collection and provides support.
I can think of better alternatives, like incorporating the container parts into Elastic's official Ansible playbook, so the support would come from the vendor and not from the community. It might also be worth trying some Edge/IoT oriented Kubernetes distribution like K3s, which is lightweight but also supports Helm charts or, better, Operators.
Install OpenSUSE MicroOS in KVM with Ignition. A step-by-step guide to provision container-specific OS instances really fast.
About OpenSUSE MicroOS
I needed a container-specific OS since I converted my docker-compose services to pods with Podman. Fedora CoreOS looks promising. However, I have been using OpenSUSE for years, so it was convenient for me to try MicroOS, which is derived from Tumbleweed.
These are the features which I like the most.
Automatic transactional updates. If you do not like the result, then you can switch back to the previous snapshot.
Up to date software versions. Especially for software like Podman which improves rapidly.
There is documentation about MicroOS, but I could not find a complete guide on how to install OpenSUSE MicroOS in KVM. Also, its Ignition guide redirects directly to CoreOS's git repo for documentation. The information is there, but putting it together takes time. As I already managed to do it, why not share it? 🙂
In my guide I will use virsh (libvirt) and virt-install (virt-manager) to provision headless MicroOS VMs based on the downloadable KVM images they made available. Both tools are higher-level APIs to KVM / QEMU.
Installing libvirt to make using QEMU easy
OpenSUSE has a package pattern for turning your OS into a virtualization host. Following the Virtualization Guide will definitely help. But if you do not want to read it all then just run the following command.
zypper in -t pattern kvm_server kvm_tools
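You will most likely also want the libvirt daemon enabled and running; this is plain systemd handling, nothing MicroOS-specific:

```
sudo systemctl enable --now libvirtd
```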
Should you want to read more about managing virtual machines with libvirt, there is documentation for that too. I will not go into details this time.
Downloading and verifying the installation media of MicroOS
Once the tools are ready, download the installation media and verify its checksum and signature.
Note that the SHA256 checksum and signature were made for a snapshot whose name differs from the file we downloaded, though the content is the same. Probably the other files on their web server are just symbolic links.
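A sketch of the download and verification, assuming the x86_64 KVM image from download.opensuse.org; the exact file names change with snapshots, so adjust accordingly:

```
# download the qcow2 image plus its checksum and signature (names are assumptions)
curl -LO https://download.opensuse.org/tumbleweed/appliances/openSUSE-MicroOS.x86_64-kvm-and-xen.qcow2
curl -LO https://download.opensuse.org/tumbleweed/appliances/openSUSE-MicroOS.x86_64-kvm-and-xen.qcow2.sha256
curl -LO https://download.opensuse.org/tumbleweed/appliances/openSUSE-MicroOS.x86_64-kvm-and-xen.qcow2.sha256.asc

# the signature covers the checksum file
gpg --verify openSUSE-MicroOS.x86_64-kvm-and-xen.qcow2.sha256.asc \
    openSUSE-MicroOS.x86_64-kvm-and-xen.qcow2.sha256

# the checksum file references the snapshot name, so compare the hashes manually
sha256sum openSUSE-MicroOS.x86_64-kvm-and-xen.qcow2
cat openSUSE-MicroOS.x86_64-kvm-and-xen.qcow2.sha256
```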
Ignition expects its configuration to be in JSON. However, one does not create the JSON file by hand, but writes a YAML file and converts it, with semantic checks (and some boilerplate), to JSON by using CoreOS's fcct. Here is my example. It is pretty straightforward.
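A minimal sketch of such a config in the fcc (Butane) YAML format; the SSH key and hostname are placeholders, not the original values:

```
# Minimal config.fcc sketch; SSH key and hostname are placeholders.
cat > config.fcc <<'EOF'
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: root
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...your-public-key... user@example
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      overwrite: true
      contents:
        inline: microos-01
EOF
```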
Please note that on OpenSUSE you cannot just provide static DNS information by overwriting /etc/resolv.conf, because its content is managed by netconfig, and netconfig's configuration is bigger than optimal to include in a YAML file. You can, however, configure it manually after the first run.
Once you are ready, put the contents into config.fcc and convert it to JSON by using fcct.
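The conversion plus handing the result over to the VM could look like this; fcct reads the fcc file directly, and under libvirt the Ignition config is typically passed through QEMU's fw_cfg device. VM name, sizing and disk paths below are assumptions:

```
# convert the fcc YAML to Ignition JSON
fcct --pretty --strict config.fcc --output config.ign

# create a headless VM from the downloaded image and inject the Ignition config
# (name, memory, vcpus and paths are assumptions)
virt-install --name microos-01 --memory 2048 --vcpus 2 \
  --import --disk /var/lib/libvirt/images/openSUSE-MicroOS.x86_64-kvm-and-xen.qcow2 \
  --os-variant generic --network network=default \
  --graphics none --noautoconsole \
  --qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=/var/lib/libvirt/images/config.ign"
```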
As soon as I managed to convert my docker-compose services to pods, I realized that managing them would not be as easy as it was with docker-compose. I faced the following problems.
There is no autostart for pods and containers. Of course you can generate systemd service units for all components but managing them is not easy without an automation tool. There are separate service files for each pod and container.
For updates you might have to stop, remove and recreate everything from scratch, unless you script it. There are no 'up', 'down' or 'build' features like you have with docker-compose.
There is no single point of configuration which I could use to describe all the pods and containers. I could write and maintain Kubernetes YAML files, but that’s even harder than using the CLI syntax I am already familiar with.
I needed a tool which makes managing Podman pods easier. podman-compose looked promising but it did not really work for me and I also did not like its CLI interface. So I decided to write my own tool.
Design goals of pods-compose
I did not want to put a lot of effort into this. I only wanted the following additional abilities.
Be able to automatically start and stop all pods and containers upon reboot.
Tear down existing pods at once.
Create pods and containers from a description at once.
Build all the images I define with a single command.
I did not want to rely on docker-compose’s YAML format. Intentionally there is no support for using an existing compose configuration. Although I was already familiar with that format, I wanted a complete migration not just a partial one.
Managing Podman pods with pods-compose
As a kickstart, let’s get a glimpse into the similarities between pods-compose and docker-compose.
| Action | pods-compose | docker-compose |
|---|---|---|
| Deploy pod(s) | `--up [POD]` | `up [SERVICE]` |
| Tear down pod(s) | `--down [POD]` | `down [SERVICE]` |
| Start pod(s) | `--start [POD]` | `start [SERVICE]` |
| Stop pod(s) | `--stop [POD]` | `stop [SERVICE]` |
| Restart pod(s) | `--restart [POD]` | `restart [SERVICE]` |
| Build all container images | `--build` | `build` |
| Status of pods and containers | `--ps` | `ps` |
| Generate Kubernetes Pod YAML(s) | `--generate` | n/a |
Autostart pods and containers
Podman can generate systemd units for pods and containers. However, there will be many of them, making them hard to overview and maintain. Because pods-compose takes care of starting and stopping pods with a single command line option, I could create a single systemd service file instead of many.
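For reference, this is what per-pod unit generation looks like; it writes one unit for the pod plus one for each of its containers into the current directory (the pod name is just an example):

```
# creates pod-mypod.service plus container-*.service files in the current directory
podman generate systemd --files --name mypod
```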
The install script will deploy that systemd service file for pods-compose. Enabling it makes your pods and containers start automatically upon reboot, and of course stop gracefully before the system halts.
Creating containers, at least for the first time, is a manual procedure. People usually start with 'docker run' commands, and once the result looks okay, they create a docker-compose YAML.
This will not change with pods-compose. You still have to create your pods and containers with ‘podman run’. However you do not have to create any YAML files. The tool will create them for you. Luckily podman CLI syntax is almost the same as docker’s, so it is easy to make progress fast.
The other part is defining which images should be built by pods-compose. Because this information cannot be set in Kubernetes YAML files, you can use pods-compose's INI formatted configuration file to define the TAG and the CONTEXT of images. As a result, pods-compose will build all the images for you.
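A purely hypothetical snippet with the TAG and CONTEXT keys mentioned above; the section name and paths are made up, check the project's README for the real format:

```
[gitea]
TAG = localhost/gitea:latest
CONTEXT = /home/user/build/gitea
```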
Final words
Let me know if you are still missing some features you would love to see implemented in pods-compose. Also please share if you liked it.
How to deploy pods with Podman when you only need a single-host system and not a complex Kubernetes. Convert your docker-compose services to pods with Podman.
For a single-host setup, or even for a now officially dead Docker Swarm setup, using docker-compose is pretty convenient. But I wanted to get rid of Docker completely and migrate my docker-compose services to pods with Podman.
The reasons why I convert docker-compose services to pods
I have been using Docker's container technology for about 4-5 years, both in production and in different labs. Call me old-fashioned, but I always managed to set up systems either with pure Docker containers or with docker-compose. However, there are things I cannot easily forget.
Here are the top reasons why I decided to convert my docker-compose services to pods with Podman and get rid of Docker completely.
Recurring errors like failing to create many bridged networks at once on a clean system, claiming 'ERROR: Pool overlaps with other one on this address space'.
Too much fiddling with iptables rules on a system using firewalld. This may not be a problem where the host OS' only role is to run containers. But there are legit cases where containers run on a host serving other purposes as well.
Daemon changes causing data loss. I learned the hard way why putting a production SQL database (stateful) into a container is a NO GO.
Inconsistency between recommendations and real life experience. Like “Don’t run more than one process in a single container” – Have you seen GitLab’s official Docker image?
I know some of these reasons may not apply to recent versions of Docker. And I am also aware that some issues are container technology related, so they may apply to Podman containers as well.
The basis of migration
Any migration requires planning and testing. So I started off with my home lab, which hosts different systems. In my lab, docker-compose took care of composing all services with a single YAML file. The following simplified figure shows a high-level overview of the network architecture. Despite what the picture may suggest, the reverse proxy is not the gateway for the containers.
Figure 1: Network architecture of services orchestrated by docker-compose
Although this system worked pretty well, I had some issues with it.
All networks use a bridge network driver to provide network isolation of service groups. Therefore you have to create many networks, which in turn increases complexity.
The network of the reverse proxy has to be connected to literally all other bridges to have access to the web servers. However, this way the proxy container can access all exposed ports of all containers on any network it is attached to, which provides a bigger attack surface.
Docker makes this networking possible with lots of iptables rules (as does Podman), which are hard to overview and which pollute the iptables rules you may already have.
Planning the conversion of docker-compose services to pods
There is a very fundamental difference between Docker and Podman: Podman supports the concept of pods, for instance. This is intentionally very similar to Kubernetes' pods. Containers in a pod share the same namespaces, like the network namespace, so all containers in the same pod appear to share the same localhost network. And each pod has its own localhost.
With Docker (Figure 1) there are 5 networks for 9 containers. With Podman by using pods there is only 1 network for 5 pods (Figure 2).
Figure 2: Network architecture of services orchestrated by Podman
Pods provide another layer of isolation I really like. This way, containers in one pod can only access the ports published by other pods, not the containers inside them.
Challenges with Podman
Migrating to a new technology is not without compromises or challenges. Podman has been around for a while and is evolving rapidly. Here are the challenges I had to handle.
Assign IP addresses to pods and not to containers
You can join a container to any network, but a pod can only be joined to the default network. According to my understanding this will change later. This is the reason why I stick to the default network in my setup.
“Most of the attributes that make up the Pod are actually assigned to the “infra” container. Port bindings, cgroup-parent values, and kernel namespaces are all assigned to the “infra” container. “
By default, pods connect to the network labeled cni_default_network in libpod.conf. If you join the pod's containers to other networks, the pod will still have its IP assigned from the default network, while the containers will have IPs assigned from the specified networks. As far as I can tell, this looks like a bug.
DNS name resolution between containers and pods
By using the container plugin dnsname you can get name resolution between containers on the same network. However at the moment you cannot have DNS between pods. That feature is under development.
While I am waiting for support of DNS at the pod level, I worked around this limitation: I publish the exposed ports of pods to their gateway's IP address, 10.88.0.1, and not to the IP address of their infra container. As long as the gateway's IP address is static, this will work.
Replacing functionalities of docker-compose
The YAML format of docker-compose is an abstraction above the 'docker run' command. However, I realized that all the hard work docker-compose did for me was to create networks and assign containers to them. And of course to deploy the services.
Networking: Rootfull container networking (CNI)
Luckily, describing what a network should look like is not the role of Podman but of CNI and its plugins. You can see the layout of the default network in Figure 3.
Figure 3. Low level network topology of Podman’s default network, called ‘podman’.
The published ports are not visible from the outside network unless you set up routes externally. Or you can simply set the IP address of the host for serving published ports (--publish 192.168.122.253:80:80). Effectively it will be another DNAT rule. For my simple case it is enough.
P.s. Do not forget to enable IP forwarding with sysctl to persist across reboots.
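For example, a drop-in file under /etc/sysctl.d keeps the setting across reboots:

```
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ipforward.conf
sudo sysctl --system   # apply immediately without rebooting
```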
P.s. 2: You may need to change the default firewall backend from iptables to firewalld in the CNI configuration, so you will have a cleaner overview of your chains and rules.
I really like that you do not have to learn another language to build an image. Use the same Dockerfile format you are already familiar with.
Building an image is not the task of Podman but of another tool called Buildah. Although you can even use podman build, it will actually use Buildah in the background. Assuming you have your Dockerfile in the current working directory, it will look like this. It can even publish the image to a Docker repository.
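With a made-up image name it boils down to this; the push target is also just an assumption:

```
# build from the Dockerfile in the current directory
podman build --tag example.com/myapp:latest .
# optionally publish it to a registry
podman push example.com/myapp:latest docker://registry.example.com/myapp:latest
```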
Docker-compose made it possible to deploy all services at once with docker-compose up -d. Achieving the same with Podman is possible by using its support for Kubernetes YAML and some shell scripting.
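The relevant Podman commands, with a placeholder pod name:

```
# dump a running pod into a Kubernetes-style YAML description
podman generate kube mypod > mypod.yaml
# later, recreate the pod and its containers from that YAML
podman play kube mypod.yaml
```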
Set up the pods and containers with all the settings you need with plain podman run commands. Here is a shell script example for gitea.
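A reduced sketch of such a script, with made-up images, volumes and the publish-to-gateway trick from above:

```
#!/bin/bash
# Sketch only: a pod for Gitea plus its PostgreSQL database.
# Images, volumes, ports and the gateway IP (10.88.0.1) are assumptions.
podman pod create --name gitea \
  --publish 10.88.0.1:3000:3000 \
  --publish 10.88.0.1:2222:22

podman run -d --pod gitea --name gitea-db \
  -e POSTGRES_USER=gitea -e POSTGRES_PASSWORD=changeme -e POSTGRES_DB=gitea \
  -v gitea-db-data:/var/lib/postgresql/data \
  docker.io/library/postgres:13

podman run -d --pod gitea --name gitea-app \
  -v gitea-app-data:/data \
  docker.io/gitea/gitea:latest
```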
I did a lot of testing, and I managed to convert all my docker-compose services to pods with Podman and some shell scripting. I still have to figure out how to auto-start pods. There are generated systemd units for containers, but I want to test them for pods as well. See you next time.