Install Jellyfin server with Ansible to set up your own media streaming service in an automated manner. Download the role to install your own media server. This post does not explain Jellyfin itself; it only covers installing it with Ansible.
Running Home Assistant on Kubernetes
Running Home Assistant on Kubernetes. It is about time to put your IoT devices under control. Everything runs off the cloud on my on-premise Kubernetes. Clean and secure.
This blog will not cover what Home Assistant is or how Kubernetes works. I assume you already have that covered.
Separate network traffic for IoT devices
I created a separate VLAN just for IoT devices on my OpenWrt router. A few things to note about this VLAN:
- A separate AP exists for devices communicating over WiFi.
- This VLAN has access neither to other networks nor to the Internet, so no IoT device will be able to phone home. Most probably this also means you cannot use the official apps.
- Kubernetes has a dedicated Nginx Ingress Controller listening in this VLAN, as opposed to the Traefik Ingress Controller which listens in the default network.
- The host running Kubernetes has a network connection in both VLANs (optional).
If you would like to create VLANs but don’t know how to start, I highly recommend watching Mark’s great video about VLANs in OpenWrt 21. It will give you a head start.
Use K3s lightweight Kubernetes distribution for IoT
Although there are many Kubernetes distributions to choose from, I have picked K3s.
It is a lightweight Kubernetes and it is proven to work on a bunch of inexpensive Raspberry Pi 4s running in a basement, just like Jeff’s Pi Dramble. Make sure to check him out, it is unbelievable. 🙂
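If you do not have a cluster yet, K3s ships with a convenience installer script; review it on k3s.io before piping it into a shell:

# curl -sfL https://get.k3s.io | sh -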
The Deployment problem of Home Assistant
There are a couple of supported ways of installing Home Assistant. Unfortunately, Kubernetes is not among them. I do not blame them: it is probably simply not worth the effort to support it, because people using Home Assistant are already a niche market, and people running Kubernetes at home even more so.
There are 4 options to install it.
- Home Assistant Operating System – their own operating system based on Docker (no go)
- Home Assistant Container – A container image containing only the core of Home Assistant (we’ll use this)
- Home Assistant Supervised – “This way of running Home Assistant will require the most of you. It also has strict requirements you need to follow.” (still requires Docker’s socket – no go)
- Home Assistant Core – Same as Home Assistant Container but without the container (no go)
The biggest drawback of using their container image is that there is no “Add-ons store” feature, so you will need to find the necessary how-tos in forums, GitHub issues, etc.
So I will rely on their container image, but what about deployment descriptors?
I cannot really use their docker-compose.yml in Kubernetes for anything other than as a base for creating my own manifests.
Helm charts were available for some time, but at the time of writing this article none of them is maintained anymore. Still, they are a better source for creating something new than the docker-compose YAMLs. I decided to roll my own manifests and keep things as simple as possible.
Installing Home Assistant with Kustomize
So I have the official container image, but I will need services other than the plain Core of Home Assistant. I need the following components.
- Home Assistant itself. You can grab the Kubernetes manifest from abalage/home-assistant-k8s.
- MQTT support (Mosquitto) which is a frequently required integration interface for lots of IoT devices. You can download the Kubernetes manifest from abalage/mosquitto-mqtt-k8s.
- Nginx Ingress Controller to expose HTTP(S) and MQTT’s TCP port 1883 (the latter is not an HTTP protocol; see the sketch after this list)
- MetalLB for acquiring an externally addressable IP address for Home Assistant’s service (LoadBalancer)
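For the MQTT port, ingress-nginx can forward raw TCP through its tcp-services ConfigMap, which the controller reads via its --tcp-services-configmap flag. A minimal sketch, assuming the controller runs in the ingress-nginx namespace and Mosquitto’s service is home-assistant/mosquitto (adjust both to your deployment):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # "external port": "namespace/service:port"
  "1883": "home-assistant/mosquitto:1883"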
As I mentioned before, the Helm charts are outdated and I needed some customization anyway, hence I use Kustomize to get all the dependencies properly configured and patched to fit my network environment.
You do not need to download the manifests individually. Just check out the GitHub project abalage/Home-Assistant-on-Kubernetes, which contains everything you need to start.
Make sure you edit `kustomization.yaml` to fit your environment. Check out the README for details.
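To give you an idea, a trimmed-down kustomization.yaml can look like the sketch below. The resource paths and patch file name are illustrative, not necessarily the repository’s actual layout:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: home-assistant
resources:
  - home-assistant/   # illustrative local paths; see the repository for the real layout
  - mosquitto/
patchesStrategicMerge:
  - patches/loadbalancer-ip.yaml   # e.g. pin the LoadBalancer IP handed out by MetalLB

Once edited, deploy everything with kubectl apply -k .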
Now you can run your own Home Assistant on Kubernetes.
What do you think? Isn’t it awesome? 🙂
Deploy Elasticsearch stack with podman and Ansible
Deploy Elasticsearch stack with podman and Ansible. Halfway on the road towards complete automation, but without the necessity of a complex orchestration tool. Somewhere between pets and cattle.
There is an existing Ansible collection, containers.podman, to handle podman pods and containers. Although Elastic, the company, already maintains an Ansible playbook for Elasticsearch, it uses regular Linux packages and not container images. Meet abalage.elasticstack_podman, a collection of Ansible roles to deploy and handle an Elasticsearch cluster and its components like Kibana, Filebeat, Metricbeat and Logstash.
You can download it from galaxy.ansible.com or from GitHub.
Requirements
Any operating system which supports a relatively recent version of podman (>=3.0) is required. Beware that CentOS 7 is not among them. The playbooks were tested on AlmaLinux 8.4 and openSUSE Leap 15.3. However, on openSUSE you need to use a third-party repository (Virtualization_containers).
The collection does not contain a reverse proxy for Kibana. You can use either Traefik or NGINX. The Kibana container already provides labels for Traefik.
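I do not list the exact labels the role sets here. As an illustration only, labels following Traefik v2’s Docker provider conventions look like this (the router/service name, hostname, and the fact that these match the collection’s labels are assumptions; Kibana’s port 5601 is its default):

labels:
  traefik.enable: "true"
  traefik.http.routers.kibana.rule: "Host(`kibana.example.com`)"
  traefik.http.services.kibana.loadbalancer.server.port: "5601"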
Features
I implemented the following features.
- It deploys an Elasticsearch cluster. It works with single-node deployments, but you can build a cluster of multiple nodes as well. You can even run multiple nodes on the same host OS.
- Use Kibana for visualization.
- Metricbeat automatically collects and stores all components’ metrics in the cluster. Use Kibana’s Stack Monitoring app to access the metrics.
- Filebeat sends the components’ logs to Elasticsearch. Use Kibana’s Logs app to access the logs.
- Optionally you can set up Logstash containers too, although not many pipeline templates are available.
- Automatically populates built-in and custom users, passwords and roles. It does not support AD integration yet.
- Pods and containers are automatically started upon reboot by using systemd units (see the sketch after this list).
- Supports host firewalld. Disabled by default.
- Works best with host networking. Support for bridge networking is best effort and has scalability limitations. It does not support rootless networking at the moment.
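Such systemd units are typically created with podman’s own generator. A sketch of doing it by hand, assuming a pod named es01 (the collection automates this; the details may differ):

# podman generate systemd --new --files --name es01
# mv *.service /etc/systemd/system/
# systemctl daemon-reload && systemctl enable --now pod-es01.service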
Usage of the collection
I expect you already have an Ansible control node and several managed hosts. The collection was developed and tested with Ansible 2.9.
Create your deployment playbook
A playbook defines which plays, roles and tasks of the collection are executed on which hosts. There is an existing playbook called elk-podman-deployment you can use, and the repository contains an example playbook as well.
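If you roll your own instead, a minimal playbook could look like the sketch below. The host group and role names are assumptions on my part; check the collection’s repository for the real ones:

---
- name: Deploy Elasticsearch stack with podman
  hosts: elasticsearch_nodes    # hypothetical group name
  become: true
  collections:
    - abalage.elasticstack_podman
  roles:
    - elasticsearch             # role names are illustrative
    - kibana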
Create your deployment inventory
The deployment inventory describes what your cluster looks like. You can use the variables from the roles’ defaults to create an inventory from scratch. However, I provide an example inventory that you can customize.
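For orientation, a minimal INI inventory might look like this sketch (group names are assumptions; treat the example inventory in the repository as authoritative):

[elasticsearch_nodes]
es01.example.com
es02.example.com

[kibana]
kibana.example.com

Passwords and other secrets belong in vaulted variable files rather than here.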
Do not forget to encrypt sensitive data with ansible-vault.
For security reasons I highly recommend creating proper X.509 certificates for TLS. Make sure to follow the Securing Elasticsearch cluster guide to create such certificates.
Run the playbook
Once the inventory is complete, you can run the playbook like this to deploy the Elasticsearch stack with podman and Ansible.
$ ansible-playbook -i /path/to/production.ini playbook.yml --vault-password-file /path/to/vault-secret
It is a good idea to run the playbook in check mode on the first run to see whether anything is missing from the inventory.
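For example:

$ ansible-playbook -i /path/to/production.ini playbook.yml --vault-password-file /path/to/vault-secret --check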
Reverse proxy
The collection itself does not provide any reverse proxy.
You can use any kind of reverse proxy to provide access to Kibana or any other component. I suggest using Traefik for auto-discovery.
Conclusion
Developing all these roles and tasks was fun, and I learned a lot about Ansible. I can recommend this collection to anyone who needs such a setup but without the requirement of having a complex orchestration platform. I am aware of production systems deployed by this playbook.
However, I think this approach is not feasible in the long run. The architecture can easily grow uncontrolled, unless someone constantly maintains the collection and provides support.
I can think of better alternatives, like incorporating the container parts into Elastic’s official Ansible playbook, so the support would come from the vendor and not from the community. It might also be worth trying some Edge/IoT-oriented Kubernetes distribution like K3s, which is lightweight but also supports Helm charts or, even better, Operators.
What do you think?
Install OpenSUSE MicroOS in KVM with Ignition
Install OpenSUSE MicroOS in KVM with Ignition. A step-by-step guide to provision container specific OS instances really fast.
About OpenSUSE MicroOS
I needed a container-specific OS since I converted my docker-compose services to pods with Podman. Fedora CoreOS looks promising; however, I have been using OpenSUSE for years, so it was convenient for me to try MicroOS, which is derived from Tumbleweed.
These are the features which I like the most.
- Automatic transactional updates. If you do not like the result, then you can switch back to the previous snapshot.
- Up to date software versions. Especially for software like Podman which improves rapidly.
- A minimal environment specially designed for container workloads. It has a greatly reduced attack surface and improved performance.
Install OpenSUSE MicroOS in KVM with Ignition
There is documentation about MicroOS, but I could not find a complete guide on how to install OpenSUSE MicroOS in KVM. Also, its Ignition guide directly redirects to CoreOS’s git repo for documentation. The information is there, but putting it together takes time.
As I already managed to do it, why not share it? 🙂
In my guide I will use virsh (libvirt) and virt-install (virt-manager) to provision headless MicroOS VMs based on the downloadable KVM images they made available. Both tools are higher-level APIs to KVM / QEMU.
Installing libvirt to make using QEMU easy
OpenSUSE has a package pattern for turning your OS into a virtualization host. Following the Virtualization Guide will definitely help. But if you do not want to read it all, just run the following command.
# zypper in -t pattern kvm_server kvm_tools
Should you want to read more about managing virtual machines with libvirt, there is documentation for that too. I will not go into details this time.
Downloading and verifying the MicroOS installation media
Once the tools are ready, download the installation media and verify the checksum and signature.
# cd /var/lib/libvirt/images
# curl -LO http://download.opensuse.org/tumbleweed/appliances/openSUSE-MicroOS.x86_64-ContainerHost-kvm-and-xen.qcow2
# curl -LO http://download.opensuse.org/tumbleweed/appliances/openSUSE-MicroOS.x86_64-ContainerHost-kvm-and-xen.qcow2.sha256
# curl -LO http://download.opensuse.org/tumbleweed/appliances/openSUSE-MicroOS.x86_64-ContainerHost-kvm-and-xen.qcow2.sha256.asc
Note that the sha256 checksum and signature were made for a snapshot whose name differs from the file we downloaded, though the content is the same. The other files on their webserver are probably just symbolic links.
# sha256sum openSUSE-MicroOS.x86_64-ContainerHost-kvm-and-xen.qcow2
# gpg --search-keys B88B2FD43DBDC284
# gpg --recv-keys 0x22C07BA534178CD02EFE22AAB88B2FD43DBDC284
# gpg --verify openSUSE-MicroOS.x86_64-ContainerHost-kvm-and-xen.qcow2.sha256.asc
Creating Ignition configuration for VM
Ignition expects its configuration to be in JSON. However, one does not simply create a JSON file by hand; instead you create a YAML file and convert it, with semantic checks (and some boilerplate), to JSON using CoreOS’s fcct. Here is my example. It is pretty straightforward.
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: root
      ssh_authorized_keys:
        - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHj2D8PAxz0MKV9UJ8dxavlVzdAvMBrfGp38sj4q/aRbkcuYVNHAQh+xXHI0VcPEtu9rqZbvqfmQt0DFhsdf938W6r3y6mLp4+6KIDgb4Jj2B3zlzBIF0haAFi/GZAp4dh4uuhHsVvZGqsdqCglxUnIPb+i8IyYA8GGU+3IOgRfjjtpMfDJcWZTzGm56yDsBYORX3EckkGcWTN4/oW0SKWoO9zf/887/CvVZF/0V7corEAdMyTCiSSqqUjIDLAZpCMU4czadZop7cvVjGT6WLmyGDuTBruvjsMwxYA/OMAZrUuOEoAW0bf/QZRZ4tO7ku+o0oqwca5uwVbuouAFovJ root@example
      password_hash: "$1$salt$qJH7.N4xYta3aEG/dfqo/0"
storage:
  files:
    - path: /etc/sysconfig/network/ifcfg-eth0
      mode: 0600
      overwrite: true
      contents:
        inline: |
          BOOTPROTO='static'
          STARTMODE='auto'
          BROADCAST=''
          ETHTOOL_OPTIONS=''
          IPADDR='192.168.0.10/24'
          MTU=''
          NAME=''
          NETWORK=''
          REMOTE_IPADDR=''
          ZONE=public
    - path: /etc/sysconfig/network/routes
      mode: 0644
      overwrite: true
      contents:
        inline: |
          default 192.168.0.254 - -
    - path: /etc/hostname
      mode: 0644
      overwrite: true
      contents:
        inline: |
          example.com
Please note that on openSUSE you cannot just provide static DNS information by overwriting /etc/resolv.conf, because its content is managed by netconfig. And netconfig’s configuration file is too big to comfortably include in a YAML file, though you can configure it manually after the first run.
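If all you need are static name servers, the netconfig variable to set in /etc/sysconfig/network/config after the first boot is NETCONFIG_DNS_STATIC_SERVERS (this is my manual post-install route, not part of the Ignition config; the address is an example):

NETCONFIG_DNS_STATIC_SERVERS="192.168.0.254"

Then regenerate /etc/resolv.conf with netconfig update -f.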
Once you are ready, put the contents into config.fcc and convert it to JSON by using fcct.
# podman run -i --rm quay.io/coreos/fcct:release --pretty --strict < config.fcc > config.ign
Create the KVM guest with virt-install
The Ignition file can be specified via QEMU command line. Adjust the specification of the VM according to your needs.
# virt-install --import --connect qemu:///system --name example \
--ram 1024 --vcpus 1 \
--disk size=20,backing_store=/var/lib/libvirt/images/openSUSE-MicroOS.x86_64-ContainerHost-kvm-and-xen.qcow2,bus=virtio \
--os-variant=opensusetumbleweed \
--network bridge=br0,model=virtio \
--noautoconsole \
--graphics spice,listen=127.0.0.1 \
--qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=/path/to/config.ign"
Now you have just installed OpenSUSE MicroOS in KVM with Ignition. Enjoy.
Managing VM with Virsh
Virsh is another tool to manage your VMs. Here are some examples I frequently use; they could be useful.
Console access
You can attach to the serial console of the VM with the following command.
# virsh console example
Disconnecting from the virsh console is possible with Ctrl+] (the escape character ^]).
Remote console access
Remote access to the console is possible with Spice. If you are not in production, you can easily access the remote Spice port without TLS via an SSH port forward.
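For example, assuming the VM’s Spice server listens on 127.0.0.1:5900 on the virtualization host (the default first Spice port; check yours with virsh dumpxml):

$ ssh -L 5900:127.0.0.1:5900 user@virtualization-host

Then point a Spice client such as remote-viewer at spice://127.0.0.1:5900.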
Delete virtual machines
This can be useful when you are not satisfied with the result and want to start over from scratch. Storage files are not automatically deleted.
# virsh dumpxml --domain example | grep 'source file'
<source file='/var/lib/libvirt/images/example.qcow2'/>
<source file='/var/lib/libvirt/images/openSUSE-MicroOS.x86_64-ContainerHost-kvm-and-xen.qcow2'/>
# virsh destroy example
# virsh undefine example
# rm -f /var/lib/libvirt/images/example.qcow2
I hope you will enjoy your shiny new container host. 🙂