We – that is to say the storage team at SUSE – have a tool we’ve been using for the past few years to help with development and testing of Ceph on SUSE Linux. It’s called sesdev because it was created largely for SES (SUSE Enterprise Storage) development. It’s essentially a wrapper around vagrant and libvirt that will spin up clusters of VMs running openSUSE or SLES, then deploy Ceph on them. You would never use such clusters in production, but it’s really nice to be able to easily spin up a cluster for testing purposes that behaves something like a real cluster would, then throw it away when you’re done.
I’ve recently been trying to spend more time playing with Kubernetes, which means I wanted to be able to spin up clusters of VMs running openSUSE or SLES, then deploy Kubernetes on them, then throw the clusters away when I was done, or when I broke something horribly and wanted to start over. Yes, I know there are a bunch of other tools for doing toy Kubernetes deployments (minikube comes to mind), but given I already had sesdev and was pretty familiar with it, I thought it’d be worthwhile seeing if I could teach it to deploy k3s, a particularly lightweight Kubernetes distribution. Turns out that wasn’t too difficult, so now I can do this:
> sesdev create k3s
=== Creating deployment "k3s" with the following configuration ===
Deployment-wide parameters (applicable to all VMs in deployment):
  deployment ID:    k3s
  number of VMs:    5
  version:          k3s
  OS:               tumbleweed
  public network:   10.20.190.0/24
Proceed with deployment (y=yes, n=no, d=show details) ? [y]: y
=== Running shell command ===
vagrant up --no-destroy-on-error --provision
Bringing machine 'master' up with 'libvirt' provider...
Bringing machine 'node1' up with 'libvirt' provider...
Bringing machine 'node2' up with 'libvirt' provider...
Bringing machine 'node3' up with 'libvirt' provider...
Bringing machine 'node4' up with 'libvirt' provider...

[... wait a few minutes (there's lots more log information output here in real life) ...]

=== Deployment Finished ===

You can login into the cluster with:

  $ sesdev ssh k3s
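I’ll spare you the details of sesdev’s provisioning scripts, but conceptually a k3s deployment like this boils down to k3s’s standard install script: one server, plus agents joined using the server’s token. Roughly the following (simplified; the exact invocation sesdev uses may differ, and <token> is a placeholder for the contents of the node-token file):

master:~ # curl -sfL https://get.k3s.io | sh -
master:~ # cat /var/lib/rancher/k3s/server/node-token
node1:~ # curl -sfL https://get.k3s.io | \
              K3S_URL=https://master:6443 K3S_TOKEN=<token> sh -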
…and then I can do this:
> sesdev ssh k3s
Last login: Fri Mar 24 11:50:15 CET 2023 from 10.20.190.204 on ssh
Have a lot of fun…
master:~ # kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   5m16s   v1.25.7+k3s1
node2    Ready    <none>                 2m17s   v1.25.7+k3s1
node1    Ready    <none>                 2m15s   v1.25.7+k3s1
node3    Ready    <none>                 2m16s   v1.25.7+k3s1
node4    Ready    <none>                 2m16s   v1.25.7+k3s1
master:~ # kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-79f67d76f8-rpj4d   1/1     Running     0          5m9s
kube-system   metrics-server-5f9f776df5-rsqhb           1/1     Running     0          5m9s
kube-system   coredns-597584b69b-xh4p7                  1/1     Running     0          5m9s
kube-system   helm-install-traefik-crd-zz2ld            0/1     Completed   0          5m10s
kube-system   helm-install-traefik-ckdsr                0/1     Completed   1          5m10s
kube-system   svclb-traefik-952808e4-5txd7              2/2     Running     0          3m55s
kube-system   traefik-66c46d954f-pgnv8                  1/1     Running     0          3m55s
kube-system   svclb-traefik-952808e4-dkkp6              2/2     Running     0          2m25s
kube-system   svclb-traefik-952808e4-7wk6l              2/2     Running     0          2m13s
kube-system   svclb-traefik-952808e4-chmbx              2/2     Running     0          2m14s
kube-system   svclb-traefik-952808e4-k7hrw              2/2     Running     0          2m14s
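(Incidentally, kubectl works out of the box on the master because k3s writes an admin kubeconfig to /etc/rancher/k3s/k3s.yaml. If you’d rather drive the cluster from your own workstation, you can copy that file to ~/.kube/config and change its server: https://127.0.0.1:6443 line to point at the master’s IP instead.)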
…and then I can make a mess with kubectl apply, helm, etc.
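For example, a quick smoke test (an arbitrary manifest of my own, nothing sesdev- or k3s-specific):

master:~ # kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
master:~ # kubectl get pods -l app=nginx-test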
One thing that sesdev knows how to do is deploy VMs with extra virtual disks. This functionality is there for Ceph deployments, but there’s no reason we can’t turn it on when deploying k3s:
> sesdev create k3s --num-disks=2
> sesdev ssh k3s
master:~ # for node in \
    $(kubectl get nodes -o 'jsonpath={.items[*].metadata.name}') ;
    do echo $node ; ssh $node cat /proc/partitions ; done
master
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
node3
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node2
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node4
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node1
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
As you can see, this gives all the worker nodes an extra two 8GB virtual disks. I suspect this may make sesdev an interesting tool for testing other Kubernetes-based storage systems such as Longhorn, but I haven’t tried that yet.
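If you want to give that a go yourself, Longhorn’s documented helm install is roughly the following, assuming helm is available on the master (per the caveat above, I haven’t tested this on a sesdev cluster; note also that Longhorn expects open-iscsi on the nodes):

master:~ # helm repo add longhorn https://charts.longhorn.io
master:~ # helm repo update
master:~ # helm install longhorn longhorn/longhorn \
               --namespace longhorn-system --create-namespace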