In my last post, I wrote about how I taught sesdev (originally a tool for deploying Ceph clusters on virtual machines) to deploy k3s, because I wanted a little sandbox in which I could break things and learn more about Kubernetes. It’s nice to be able to do a toy deployment locally, on a bunch of VMs, on my own hardware, in my home office, rather than paying to do it on someone else’s computer. Given the k3s thing worked, I figured the next step was to teach sesdev how to deploy Longhorn so I could break that and learn more about it too.
Teaching an odd dog new tricks
We – that is to say the storage team at SUSE – have a tool we’ve been using for the past few years to help with development and testing of Ceph on SUSE Linux. It’s called sesdev because it was created largely for SES (SUSE Enterprise Storage) development. It’s essentially a wrapper around vagrant and libvirt that will spin up clusters of VMs running openSUSE or SLES, then deploy Ceph on them. You would never use such clusters in production, but it’s really nice to be able to easily spin up a cluster for testing purposes that behaves something like a real cluster would, then throw it away when you’re done.
I’ve recently been trying to spend more time playing with Kubernetes, which means I wanted to be able to spin up clusters of VMs running openSUSE or SLES, then deploy Kubernetes on them, then throw the clusters away when I was done, or when I broke something horribly and wanted to start over. Yes, I know there’s a bunch of other tools for doing toy Kubernetes deployments (minikube comes to mind), but given I already had sesdev and was pretty familiar with it, I thought it’d be worthwhile seeing if I could teach it to deploy k3s, a particularly lightweight version of Kubernetes. Turns out that wasn’t too difficult, so now I can do this:
> sesdev create k3s
=== Creating deployment "k3s" with the following configuration ===

Deployment-wide parameters (applicable to all VMs in deployment):

  deployment ID:   k3s
  number of VMs:   5
  version:         k3s
  OS:              tumbleweed
  public network:  10.20.190.0/24

Proceed with deployment (y=yes, n=no, d=show details) ? [y]: y

=== Running shell command ===
vagrant up --no-destroy-on-error --provision
Bringing machine 'master' up with 'libvirt' provider...
Bringing machine 'node1' up with 'libvirt' provider...
Bringing machine 'node2' up with 'libvirt' provider...
Bringing machine 'node3' up with 'libvirt' provider...
Bringing machine 'node4' up with 'libvirt' provider...

[... wait a few minutes (there's lots more log information output here in real life) ...]

=== Deployment Finished ===

You can login into the cluster with:

  $ sesdev ssh k3s
…and then I can do this:
> sesdev ssh k3s
Last login: Fri Mar 24 11:50:15 CET 2023 from 10.20.190.204 on ssh
Have a lot of fun…
master:~ # kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   5m16s   v1.25.7+k3s1
node2    Ready    <none>                 2m17s   v1.25.7+k3s1
node1    Ready    <none>                 2m15s   v1.25.7+k3s1
node3    Ready    <none>                 2m16s   v1.25.7+k3s1
node4    Ready    <none>                 2m16s   v1.25.7+k3s1
master:~ # kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-79f67d76f8-rpj4d   1/1     Running     0          5m9s
kube-system   metrics-server-5f9f776df5-rsqhb           1/1     Running     0          5m9s
kube-system   coredns-597584b69b-xh4p7                  1/1     Running     0          5m9s
kube-system   helm-install-traefik-crd-zz2ld            0/1     Completed   0          5m10s
kube-system   helm-install-traefik-ckdsr                0/1     Completed   1          5m10s
kube-system   svclb-traefik-952808e4-5txd7              2/2     Running     0          3m55s
kube-system   traefik-66c46d954f-pgnv8                  1/1     Running     0          3m55s
kube-system   svclb-traefik-952808e4-dkkp6              2/2     Running     0          2m25s
kube-system   svclb-traefik-952808e4-7wk6l              2/2     Running     0          2m13s
kube-system   svclb-traefik-952808e4-chmbx              2/2     Running     0          2m14s
kube-system   svclb-traefik-952808e4-k7hrw              2/2     Running     0          2m14s
…and then I can make a mess with kubectl apply, helm, etc.
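To give an idea of the sort of mess I mean, it might look something like this (an arbitrary illustration I’m making up here, not anything sesdev sets up for you, and it assumes the helm CLI is available on the master node):

# arbitrary example workloads, not part of the sesdev deployment
master:~ # kubectl create deployment nginx --image=nginx
master:~ # kubectl expose deployment nginx --port=80 --type=NodePort
master:~ # helm repo add bitnami https://charts.bitnami.com/bitnami
master:~ # helm install my-nginx bitnami/nginx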
One thing that sesdev knows how to do is deploy VMs with extra virtual disks. This functionality is there for Ceph deployments, but there’s no reason we can’t turn it on when deploying k3s:
> sesdev create k3s --num-disks=2
> sesdev ssh k3s
master:~ # for node in \
    $(kubectl get nodes -o 'jsonpath={.items[*].metadata.name}') ;
    do echo $node ; ssh $node cat /proc/partitions ; done
master
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
node3
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node2
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node4
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node1
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
As you can see, this gives each of the worker nodes an extra two 8GB virtual disks. I suspect this may make sesdev an interesting tool for testing other Kubernetes-based storage systems such as Longhorn, but I haven’t tried that yet.
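If I do get around to trying that, I’d expect the installation to look roughly like the upstream Longhorn Helm instructions (untested on these VMs, so treat it as a sketch; Longhorn also expects open-iscsi to be installed on every node first):

# per the upstream Longhorn chart docs; untested on this cluster
master:~ # helm repo add longhorn https://charts.longhorn.io
master:~ # helm repo update
master:~ # helm install longhorn longhorn/longhorn \
    --namespace longhorn-system --create-namespace
master:~ # kubectl -n longhorn-system get pods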
Hack Week 22: An Art Project
Back in 2012, I received a box of eight hundred openSUSE 12.1 promo DVDs, which I then set out to distribute to local Linux users’ groups, tech conferences, other SUSE crew in Australia, and so forth. I didn’t manage to shift all 800 DVDs at the time, and I recently rediscovered the remaining three hundred and eighty-four while installing some new shelves. As openSUSE 12.1 went end of life in May 2013, it seemed likely the DVDs were now useless, but I couldn’t bring myself to toss them in landfill. Instead, given last week was Hack Week, I decided to use them for an art project. Here’s the end result:
[Photo: a mosaic made from the leftover openSUSE 12.1 DVDs]
Making that mosaic was extremely fiddly. It’s possibly the most annoying Hack Week project I’ve ever done, but I’m very happy with the outcome 🙂
TANSTAAFL
It’s been a little over a year since our Redflow ZCell battery and Victron Energy inverter/charger kit were installed on our existing 5.94kW solar array. Now that we’re past the Southern Hemisphere spring equinox it seems like an opportune time to review the numbers and try to see exactly how the system has performed over its first full year. For background information on what all the pieces are and what they do, see my earlier post, Go With The Flow.
As we look at the figures for the year, it’s worth keeping in mind what we’re using the battery for, and how we’re doing it. Naturally we’re using it to store PV-generated electricity for later use when the sun’s not shining. We are also charging the battery from the grid at certain times so it can be drawn down if necessary during peak times. For example, I set up a small overnight charge to ensure there was power for the weekday morning peak, when the sun isn’t really happening yet but grid power is more than twice as expensive. More recently, in the winter months, I experimented with keeping the battery full with scheduled charges during most non-peak times. This involved quite a bit more grid charging, but it got us through a couple of three-hour grid outages without a hitch during some severe weather in August.
An S3 Storage Experiment
My team at SUSE is working on a new S3-compatible storage solution for Kubernetes, based on Ceph’s RADOS Gateway (RGW), except without any of the RADOS bits. The idea is that you can deploy our s3gw container on top of Longhorn (which provides the underlying replicated storage), and all this is running in your Kubernetes cluster, along with your applications which thus have convenient access to a local S3-compatible object store.
We’ve done this by adding a new storage backend to RGW. The approach we’ve taken is to use SQLite for metadata, with object data stored as files in a regular filesystem. This works quite neatly in a Kubernetes cluster with Longhorn, because Longhorn can provide a persistent volume (think: an ext4 filesystem), on which s3gw can store its SQLite database and object data files. If you’d like to kick the tyres, check out Giuseppe’s deployment tutorial for the 0.2.0 release, but bear in mind that as I’m writing this we’re all the way up to 0.4.0 so some details may have changed.
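Once it’s running, anything that speaks S3 should be able to talk to it. As a rough illustration (my own sketch, not from the tutorial: the endpoint address and bucket name are made up, and I’m assuming access credentials are already set in the usual AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables), using the AWS CLI it’d be something like this:

# s3gw.local:7480 and test-bucket are made-up placeholders
> aws --endpoint-url http://s3gw.local:7480 s3 mb s3://test-bucket
> aws --endpoint-url http://s3gw.local:7480 s3 cp ./some-file.txt s3://test-bucket/
> aws --endpoint-url http://s3gw.local:7480 s3 ls s3://test-bucket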
While s3gw on Longhorn on Kubernetes remains our primary focus for this project, the fact that this thing only needs a filesystem for backing storage means it can be run on top of just about anything. Given “just about anything” includes an old school two node Pacemaker cluster with DRBD for replicated storage, why not give that a try? I kinda like the idea of a good solid highly available S3-compatible storage solution that you could shove into the bottom of a rack somewhere without too much difficulty.
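To give a flavour of what that might look like in crmsh terms, here’s a very rough sketch (the DRBD resource name, device path, mount point, virtual IP and container image are all placeholders I’ve made up, not an actual tested configuration):

# all names, paths and the image below are illustrative placeholders
# DRBD replicates the backing device between the two nodes; one side gets promoted
crm configure primitive p-drbd-s3gw ocf:linbit:drbd \
    params drbd_resource=s3gw op monitor interval=15s
crm configure ms ms-drbd-s3gw p-drbd-s3gw \
    meta master-max=1 clone-max=2 notify=true
# Filesystem on the DRBD device, a floating IP, and the s3gw container itself
crm configure primitive p-fs-s3gw ocf:heartbeat:Filesystem \
    params device=/dev/drbd0 directory=/srv/s3gw fstype=ext4
crm configure primitive p-ip-s3gw ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.200
crm configure primitive p-s3gw ocf:heartbeat:podman \
    params image=ghcr.io/aquarist-labs/s3gw:latest \
    run_opts="-p 7480:7480 -v /srv/s3gw:/data"
# Group everything together and run it wherever the DRBD Primary is
crm configure group g-s3gw p-fs-s3gw p-ip-s3gw p-s3gw
crm configure colocation c-s3gw-with-drbd inf: g-s3gw ms-drbd-s3gw:Master
crm configure order o-drbd-before-s3gw inf: ms-drbd-s3gw:promote g-s3gw:start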
Hack Week 21: Keeping the Battery Full
As described in some detail in my last post, we have a single 10kWh Redflow ZCell zinc bromine flow battery hooked up to our solar PV via Victron inverter/chargers. This gives us the ability to:
- Store almost all the excess energy we generate locally for later use.
- When the sun isn’t shining, grid charge the battery at off-peak times then draw it down at peak times to save on our electricity bill (peak grid power is slightly more than twice as expensive as off-peak grid power).
- Opportunistically survive grid outages, provided they don’t happen at the wrong time (i.e. when the sun is down and the battery is at 0% state of charge).
By their nature, ZCell flow batteries need to undergo a maintenance cycle at least every three days, during which they are discharged completely for a few hours. That’s why the last point above reads “opportunistically survive grid outages”. With a single ZCell, we can’t use the “minimum state of charge” feature of the Victron kit to always keep some charge in the battery in case of outages, because doing so conflicts with the ZCell maintenance cycles. Once we eventually get a second battery, this problem will go away, because the maintenance cycles automatically interleave. In the meantime though, as my project for Hack Week 21, I decided to see if I could somehow automate the Victron scheduled charge configuration based on the ZCell maintenance cycle timing, to always keep the battery as full as possible for as long as possible.
Go With The Flow
We installed 5.94kW of solar PV in late 2017, with an ABB PVI-6000TL-OUTD inverter, and also a nice energy efficient Sanden heat pump hot water service to replace our crusty old conventional electric hot water system. In the four years since then we’ve generated something like 24MWh of electricity, but were actually only able to directly use maybe 45% of that – the rest was exported to the grid.
The plan had always been to get batteries once we were able to afford to do so, and this actually happened in August 2021, when we finally got a single 10kWh Redflow ZCell zinc bromine flow battery installed. We went with Redflow for several reasons:
- Unlike every other type of battery, they’re not a horrible fire hazard (in fact, the electrolyte, while corrosive, is actually fire retardant – a good thing when you live in a bushfire prone area).
- They’re modular, so you can just keep adding more of them.
- 100% depth of discharge (i.e. they’re happy to keep being cycled, and can also be left discharged/idle for extended periods).
- All the battery components can be recycled at end of life.
- They’re Australian designed and developed, with manufacturing in Thailand.
Our primary reasons for wanting battery storage were to ensure we’re using renewable energy rather than fossil fuels, to try to actually use all our local power generation locally, and to attain some degree of disaster resilience.
Scope Creep
On December 22, I decided to brew an oatmeal stout (5kg Gladfield ale malt, 250g dark chocolate malt, 250g light chocolate malt, 250g dark crystal malt, 500g rolled oats, 150g rice hulls to stop the mash sticking, 25g Pride of Ringwood hops, Safale US-05 yeast). This all takes a good few hours to do the mash and the boil and everything, so while that was underway I thought it’d be a good opportunity to remove a crappy old cupboard from the laundry, so I could put our nice Miele upright freezer in there, where it’d be closer to the kitchen (the freezer is presently in a room at the other end of the house).
I Have No Idea How To Debug This
On my desktop system, I’m running XFCE on openSUSE Tumbleweed. When I leave my desk, I hit the “lock screen” button, the screen goes black, and the monitors go into standby. So far so good. When I come back and mash the keyboard, everything lights up again, the screens go white, and it says:
blank: Shows nothing but a black screen
Name: tserong@HOSTNAME
Password:
Enter password to unlock; select icon to lock
So I type my password, hit ENTER, and I’m back in action. So far so good again. Except… Several times recently, when I’ve come back and mashed the keyboard, the white overlay is gone. I can see all my open windows, my mail client, web browser, terminals, everything, but the screen is still locked. If I type my password and hit ENTER, it unlocks and I can interact again, but this is where it gets really weird. All the windows have moved down a bit on the screen. For example, a terminal that was previously neatly positioned towards the bottom of the screen is now partially off the screen. So “something” crashed – whatever overlay the lock thingy put there is gone? And somehow this affected the position of all my application windows? What in the name of all that is good and holy is going on here?
Update 2020-12-21: I’ve opened boo#1180241 to track this.
Network Maintenance
To my intense amazement, it seems that NBN Co have finally done sufficient capacity expansion on our local fixed wireless tower to actually resolve the evening congestion issues we’ve been having for the past couple of years. Where previously we’d been getting 22-23Mbps during the day and more like 2-3Mbps (or worse) during the evenings, we’re now back to 22-23Mbps all the time, and the status lights on the NTD remain a pleasing green, rather than alternating between green and amber. This is how things were way back at the start, six years ago.