Teaching an odd dog new tricks

We – that is to say the storage team at SUSE – have a tool we’ve been using for the past few years to help with development and testing of Ceph on SUSE Linux. It’s called sesdev because it was created largely for SES (SUSE Enterprise Storage) development. It’s essentially a wrapper around vagrant and libvirt that will spin up clusters of VMs running openSUSE or SLES, then deploy Ceph on them. You would never use such clusters in production, but it’s really nice to be able to easily spin up a cluster for testing purposes that behaves something like a real cluster would, then throw it away when you’re done.

I’ve recently been trying to spend more time playing with Kubernetes, which means I wanted to be able to spin up clusters of VMs running openSUSE or SLES, then deploy Kubernetes on them, then throw the clusters away when I was done, or when I broke something horribly and wanted to start over. Yes, I know there’s a bunch of other tools for doing toy Kubernetes deployments (minikube comes to mind), but given I already had sesdev and was pretty familiar with it, I thought it’d be worthwhile seeing if I could teach it to deploy k3s, a particularly lightweight Kubernetes distribution. Turns out that wasn’t too difficult, so now I can do this:

> sesdev create k3s
=== Creating deployment "k3s" with the following configuration === 
Deployment-wide parameters (applicable to all VMs in deployment):
deployment ID:    k3s
number of VMs:    5
version:          k3s
OS:               tumbleweed
public network:   10.20.190.0/24 
Proceed with deployment (y=yes, n=no, d=show details) ? [y]: y
=== Running shell command ===
vagrant up --no-destroy-on-error --provision
Bringing machine 'master' up with 'libvirt' provider...
Bringing machine 'node1' up with 'libvirt' provider...
Bringing machine 'node2' up with 'libvirt' provider...
Bringing machine 'node3' up with 'libvirt' provider...
Bringing machine 'node4' up with 'libvirt' provider...

[...
  wait a few minutes
  (there's lots more log information output here in real life)
...]

=== Deployment Finished ===
 You can login into the cluster with:
 $ sesdev ssh k3s

…and then I can do this:

> sesdev ssh k3s
Last login: Fri Mar 24 11:50:15 CET 2023 from 10.20.190.204 on ssh
Have a lot of fun…

master:~ # kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   5m16s   v1.25.7+k3s1
node2    Ready    <none>                 2m17s   v1.25.7+k3s1
node1    Ready    <none>                 2m15s   v1.25.7+k3s1
node3    Ready    <none>                 2m16s   v1.25.7+k3s1
node4    Ready    <none>                 2m16s   v1.25.7+k3s1

master:~ # kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-79f67d76f8-rpj4d   1/1     Running     0          5m9s
kube-system   metrics-server-5f9f776df5-rsqhb           1/1     Running     0          5m9s
kube-system   coredns-597584b69b-xh4p7                  1/1     Running     0          5m9s
kube-system   helm-install-traefik-crd-zz2ld            0/1     Completed   0          5m10s
kube-system   helm-install-traefik-ckdsr                0/1     Completed   1          5m10s
kube-system   svclb-traefik-952808e4-5txd7              2/2     Running     0          3m55s
kube-system   traefik-66c46d954f-pgnv8                  1/1     Running     0          3m55s
kube-system   svclb-traefik-952808e4-dkkp6              2/2     Running     0          2m25s
kube-system   svclb-traefik-952808e4-7wk6l              2/2     Running     0          2m13s
kube-system   svclb-traefik-952808e4-chmbx              2/2     Running     0          2m14s
kube-system   svclb-traefik-952808e4-k7hrw              2/2     Running     0          2m14s

…and then I can make a mess with kubectl apply, helm, etc.
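
For example, to check that workloads actually schedule, something like this should do the trick (the manifest URL here is the stock nginx deployment example from the Kubernetes documentation, not something sesdev provides):

master:~ # kubectl apply -f https://k8s.io/examples/application/deployment.yaml
master:~ # kubectl get pods -l app=nginx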

One thing that sesdev knows how to do is deploy VMs with extra virtual disks. This functionality is there for Ceph deployments, but there’s no reason we can’t turn it on when deploying k3s:

> sesdev create k3s --num-disks=2
> sesdev ssh k3s
master:~ # for node in \
    $(kubectl get nodes -o 'jsonpath={.items[*].metadata.name}') ;
    do echo $node ; ssh $node cat /proc/partitions ; done
master
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
node3
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node2
 major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node4
 major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node1
 major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc

As you can see, this gives all the worker nodes an extra two 8GB virtual disks. I suspect this may make sesdev an interesting tool for testing other Kubernetes-based storage systems such as Longhorn, but I haven’t tried that yet.
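
If I do get around to Longhorn, the first attempt would presumably look something like this, per Longhorn’s documented Helm chart (a sketch only: I haven’t actually run it on a sesdev cluster, helm would need to be present on the master, and Longhorn also wants open-iscsi installed on every node):

> sesdev ssh k3s
master:~ # helm repo add longhorn https://charts.longhorn.io
master:~ # helm repo update
master:~ # helm install longhorn longhorn/longhorn \
    --namespace longhorn-system --create-namespace
master:~ # kubectl -n longhorn-system get pods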

Hack Week 22: An Art Project

Back in 2012, I received a box of 800 openSUSE 12.1 promo DVDs, which I then set out to distribute to local Linux users’ groups, tech conferences, other SUSE crew in Australia, and so forth. I didn’t manage to shift all 800 DVDs at the time, and I recently rediscovered the remaining 384 while installing some new shelves. As openSUSE 12.1 went end of life in May 2013, it seemed likely the DVDs were now useless, but I couldn’t bring myself to toss them in landfill. Instead, given last week was Hack Week, I decided to use them for an art project. Here’s the end result:

Geeko mosaic made of cut up openSUSE DVDs, on a 900mm x 600mm piece of plywood

Making that mosaic was extremely fiddly. It’s possibly the most annoying Hack Week project I’ve ever done, but I’m very happy with the outcome 🙂


I Have No Idea How To Debug This

On my desktop system, I’m running XFCE on openSUSE Tumbleweed. When I leave my desk, I hit the “lock screen” button, the screen goes black, and the monitors go into standby. So far so good. When I come back and mash the keyboard, everything lights up again, the screens go white, and it says:

blank: Shows nothing but a black screen
Name: tserong@HOSTNAME
Password:
Enter password to unlock; select icon to lock

So I type my password, hit ENTER, and I’m back in action. So far so good again. Except… Several times recently, when I’ve come back and mashed the keyboard, the white overlay is gone. I can see all my open windows, my mail client, web browser, terminals, everything, but the screen is still locked. If I type my password and hit ENTER, it unlocks and I can interact again, but this is where it gets really weird. All the windows have moved down a bit on the screen. For example, a terminal that was previously neatly positioned towards the bottom of the screen is now partially off the screen. So “something” crashed – whatever overlay the lock thingy put there is gone? And somehow this affected the position of all my application windows? What in the name of all that is good and holy is going on here?

Update 2020-12-21: I’ve opened boo#1180241 to track this.

Hackweek0x10: Fun in the Sun

We recently had a 5.94kW solar PV system installed – twenty-two 270W panels (14 on the northish side of the house, 8 on the eastish side), with an ABB PVI-6000TL-OUTD inverter. Naturally I want to be able to monitor the system, but this model inverter doesn’t have an inbuilt web server (which, given the state of IoT devices, I’m actually kind of happy about); rather, it has an RS-485 serial interface. ABB sell add-on data logger cards for several hundred dollars, but Rick from Affordable Solar Tasmania mentioned he had another client who was doing monitoring with a little Linux box and an RS-485 to USB adapter. As I had a Raspberry Pi 3 handy, I decided to do the same.


Salt and Pepper Squid with Fresh Greens

A few days ago I told Andrew Wafaa I’d write up some notes for him and publish them here. I became hungry contemplating this work, so decided cooking was the first order of business:

Salt and Pepper Squid with Fresh Greens

It turned out reasonably well for a first attempt. Could’ve been crispier, and it was quite salty, but the pepper and chilli definitely worked (I’m pretty sure the chilli was dried bhut jolokia I harvested last summer). But this isn’t a post about food, it’s about some software I’ve packaged for managing Ceph clusters on openSUSE and SUSE Linux Enterprise Server.


Watching Grass Grow

For Hackweek 11 I thought it’d be fun to learn something about creating Android apps. The basic training is pretty straightforward, and the auto-completion (and auto-just-about-everything-else) in Android Studio is excellent. So having created a “hello world” app, and having learned something about activities and application lifecycle, I figured it was time to create something else. Something fun, but something I could reasonably complete in a few days. Given that Android devices are essentially just high res handheld screens with a bit of phone hardware tacked on, it seemed a crime not to write an app that draws something pretty.

Happiness is a Hong Kong SIM

In 1996 Regurgitator released a song called “Kong Foo Sing”. It starts with the line “Happiness is a Kong Foo Sing”, in reference to a particular brand of fortune cookie. But one night last week at the OpenStack Summit, I couldn’t help but think it would be better stated as “Happiness is a Hong Kong SIM”, because I’ve apparently become thoroughly addicted to my data connection.

I was there with five other SUSE engineers who work on SUSE Cloud (our OpenStack offering); Ralf Haferkamp, Michal Jura, Dirk Müller, Vincent Untz and Bernhard Wiedemann. We also had SUSE crew manning a booth which had one of those skill tester machines filled with plush Geekos. I didn’t manage to get one. Apparently my manual dexterity is less well developed than my hacking skills, because I did make ATC thanks to a handful of openSUSE-related commits to TripleO (apologies for the shameless self-aggrandizement, but this is my blog after all).

Given this was my first design summit, I thought it most sensible to first attend “Design Summit 101”, to get a handle on the format. The summit as a whole is split into general sessions and design summit sessions, the former for everyone, the latter intended for developers to map out what needs to happen for the next release. There are also vendor booths in the main hall.

Roughly speaking, design sessions get a bunch of people together with a moderator/leader and an etherpad up on a projector, which anyone can edit. Then whatever the topic is gets hashed out over the next forty-odd minutes. It’s actually a really good format. In the sessions I was in, anyone who wanted to speak or had something to offer was heard. Everyone was courteous, and very welcoming of input, and of newcomers. Actually, as I remarked on the last day towards the end of Joshua McKenty’s “Culture, Code, Community and Conway” talk, everyone is terrifyingly happy. And this is not normal, but it’s a good thing.

As I’ve been doing high availability and storage for the past several years, and have also spent time on SUSE porting and scalability work on Crowbar, I split my time largely between HA, storage and deployment sessions.

On the deployment front, I went to:

On High Availability:

On Storage:

  • Encrypted Block Storage: Technical Walkthrough. This looks pretty neat. Crypto is done on the compute host via dm-crypt, so everything is encrypted in the volume store and even over the wire going to and from the compute host. Still needs work (naturally), notably it currently uses a single static key. Later, it will use Barbican.
  • Swift Drive Workloads and Kinetic Open Storage. Sadly I had to skip out of this one early, but Seagate now has an interesting product: disks (and some enclosures) that present themselves as key/value stores over Ethernet, rather than as block devices. The idea here is that you remove a whole lot of layers of the storage stack to try to get better performance.
  • Real World Usage of GlusterFS + OpenStack. Interesting history of the project, what the pieces are, and how they now provide an “all-in-one” storage solution for OpenStack.
  • Ceph: The De Facto Storage Backend for OpenStack. It was inevitable that this would go back-to-back with a GlusterFS presentation. All storage components (Glance, Cinder, object store) unified. Interestingly, the libvirt_image_type=rbd option lets you boot all VMs directly from Ceph, at least if you’re using KVM (there’s a rough sketch of the related nova.conf settings after this list). Is it the perfect stack? “Almost” (glance images are still copied around more than they should be, but there’s a patch for this floating around somewhere, also some snapshot integration work is still necessary).
  • Sheepdog: Yet Another All-In-One Storage for Openstack. So everyone is doing all-in-one storage for OpenStack now 😉 I haven’t spent any time with Sheepdog in the past, so this was interesting. It apparently tries to have minimal assumptions about the underlying kernel and filesystem, yet supports thousands of nodes, is purportedly fast and small (<50MB memory footprint) and consists of only 35K lines of C code.
  • Ceph OpenStack Integration Unconference (gathering ideas to improve Ceph integration in OpenStack).
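
As promised above, here’s roughly what booting VMs from RBD involves on the nova side. Consider it a sketch from memory rather than anything taken from the session: the option names are the newer [libvirt] section spellings, and the pool and user names are just the conventional ones from the Ceph/OpenStack integration docs, so adjust for whatever release you’re actually running.

# nova.conf on the compute hosts (sketch only)
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000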

Around all this of course were many interesting discussions, meals and drinks with all sorts of people; my immediate colleagues, my some-time partners in crime, various long-time conference buddies and an assortment of delightful (and occasionally crazy) new acquaintances. If you’ve made it this far and haven’t been to an OpenStack summit yet, try to get to Atlanta in six months or Paris in a year. I don’t know yet whether or not I’ll be there, but I can pretty much guarantee you’ll still have a good time.

A Cosmic Dance in a Little Box

It’s Hack Week again. This time around I decided to look at running TripleO on openSUSE. If you’re not familiar with TripleO, it’s short for OpenStack on OpenStack, i.e. it’s a project to deploy OpenStack clouds on bare metal, using the components of OpenStack itself to do the work. I take some delight in bootstrapping of this nature – I think there’s a nice symmetry to it. Or, possibly, I’m just perverse.

Anyway, onwards. I had a chat to Robert Collins about TripleO while at PyCon AU 2013. He introduced me to diskimage-builder and suggested that making it capable of building openSUSE images would be a good first step. It turned out that making diskimage-builder actually run on openSUSE was probably a better first step, but I managed to get most of that out of the way in a random fit of hackery a couple of months ago. Further testing this week uncovered a few more minor kinks, two of which I’ve fixed here and here. It’s always the cross-distro work that seems to bring out the edge cases.

Then I figured there’s not much point making diskimage-builder create openSUSE images without knowing I can set up some sort of environment to validate them. So I’ve spent large parts of the last couple of days working my way through the TripleO Dev/Test instructions, deploying the default Ubuntu images with my openSUSE 12.3 desktop as VM host. For those following along at home, the install-dependencies script doesn’t work on openSUSE (some manual intervention required, which I’ll try to either fix, document, or both, later). Anyway, at some point last night, I had what appeared to be a working seed VM, and a broken undercloud VM which was choking during cloud-init:

Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
Request timed out

Figuring that out, well…  There I was with a seed VM deployed from an image built with some scripts from several git repositories, automatically configured to run even more pieces of OpenStack than I’ve spoken about before, which in turn had attempted to deploy a second VM, which wanted to connect back to the first over a virtual bridge and via the magic of some iptables rules and I was running tcpdump and tailing logs and all the moving parts were just suddenly this GIANT COSMIC DANCE in a tiny little box on my desk on a hill on an island at the bottom of the world.

It was at this point I realised I had probably been sitting at my computer for too long.

It turns out the problem above was due to my_ip being set to an empty string in /etc/nova/nova.conf on the seed VM. Somehow I didn’t have the fix in my local source repo. An additional problem is that libvirt on openSUSE, like Fedora, doesn’t set uri_default="qemu:///system". This causes nova baremetal calls from the seed VM to the host to fail, as mentioned in bug #1226310. This bug is apparently fixed, but the fix doesn’t seem to work for me (another thing to investigate), so I went with the workaround of putting uri_default="qemu:///system" in ~/.config/libvirt/libvirt.conf.
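
For the record, the workaround is just this one line:

# ~/.config/libvirt/libvirt.conf
uri_default = "qemu:///system"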

So now (after a rather spectacular amount of disk and CPU thrashing) there are three OpenStack clouds running on my desktop PC. No smoke has come out.

  • The seed VM has successfully spun up the “baremetal_0” undercloud VM and deployed OpenStack to it.
  • The undercloud VM has successfully spun up the “baremetal_1” and “baremetal_2” VMs and deployed them as the overcloud control and compute nodes.
  • I have apparently booted a demo VM in the overcloud, i.e. I’ve got a VM running inside a VM, although I haven’t quite managed to ssh into the latter yet (I suspect I’m missing a route or a firewall rule somewhere).

I think I had it right last night. There is a giant cosmic dance being performed in a tiny little box on my desk on a hill on an island at the bottom of the world.

Or, I’ve been sitting at my computer for too long again.

One More chef-client Run

Carrying on from my last post, the failed chef-client run came down to the init script in ceph 0.56 not yet knowing how to iterate /var/lib/ceph/{mon,osd,mds} and automatically start the appropriate daemons. This functionality seems to have been introduced in 0.58 or so by commit c8f528a. So I gave it another shot with a build of ceph 0.60.
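
Conceptually the new behaviour boils down to something like this (a simplified sketch of the effect, not the actual init script code):

# start whichever daemons have data directories under /var/lib/ceph
for type in mon osd mds ; do
    for dir in /var/lib/ceph/$type/ceph-* ; do
        [ -d "$dir" ] || continue
        id="${dir#/var/lib/ceph/$type/ceph-}"   # e.g. "ceph-0" for a mon, "0" for an OSD
        ceph-$type -i "$id"
    done
done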

On each of my ceph nodes, a bit of upgrading and cleanup. Note the choice of ceph 0.60 was mostly arbitrary; I just wanted the latest thing I could find an RPM for in a hurry. Also, some of the rm invocations won’t be necessary, depending on what state things are actually in:

# zypper ar -f http://download.opensuse.org/repositories/home:/dalgaaf:/ceph:/extra/openSUSE_12.3/home:dalgaaf:ceph:extra.repo
# zypper ar -f http://gitbuilder.ceph.com/ceph-rpm-opensuse12-x86_64-basic/ref/next/x86_64/ ceph.com-next_openSUSE_12_x86_64
# zypper in ceph-0.60
# kill $(pidof ceph-mon)
# rm /etc/ceph/*
# rm /var/run/ceph/*
# rm -r /var/lib/ceph/*/*

That last gets rid of any half-created mon directories.

I also edited the Ceph environment to only have one mon (one of my colleagues rightly pointed out that you need an odd number of mons, and I had declared two previously, for no good reason). That’s knife environment edit Ceph on my desktop, and set "mon_initial_members": "ceph-0" instead of "ceph-0,ceph-1".

I also had to edit each of the nodes to add an osd_devices array, and to remove the mon role from ceph-1. That’s knife node edit ceph-0.example.com then insert:

  "normal": {
    ...
    "ceph": {
      "osd_devices": [  ]
    }
  ...

Without the osd_devices array defined, the osd recipe fails (“undefined method `each_with_index’ for nil:NilClass”). I was kind of hoping an empty osd_devices array would allow ceph to use the root partition. No such luck; the cookbook really does expect you to be doing a sensible deployment with actual separate devices for your OSDs. Oh, well. I’ll try that another time. For now at least I’ve demonstrated that ceph-0.60 does give you what appears to be a clean mon setup when using the upstream cookbooks on openSUSE 12.3:

knife ssh name:ceph-0.example.com -x root chef-client
[2013-04-15T06:32:13+00:00] INFO: *** Chef 10.24.0 ***
[2013-04-15T06:32:13+00:00] INFO: Run List is [role[ceph-mon], role[ceph-osd], role[ceph-mds]]
[2013-04-15T06:32:13+00:00] INFO: Run List expands to [ceph::mon, ceph::osd, ceph::mds]
[2013-04-15T06:32:13+00:00] INFO: HTTP Request Returned 404 Not Found: No routes match the request: /reports/nodes/ceph-0.example.com/runs
[2013-04-15T06:32:13+00:00] INFO: Starting Chef Run for ceph-0.example.com
[2013-04-15T06:32:13+00:00] INFO: Running start handlers
[2013-04-15T06:32:13+00:00] INFO: Start handlers complete.
[2013-04-15T06:32:13+00:00] INFO: Loading cookbooks [apache2, apt, ceph]
[2013-04-15T06:32:13+00:00] INFO: Processing template[/etc/ceph/ceph.conf] action create (ceph::conf line 6)
[2013-04-15T06:32:13+00:00] INFO: template[/etc/ceph/ceph.conf] updated content
[2013-04-15T06:32:13+00:00] INFO: template[/etc/ceph/ceph.conf] mode changed to 644
[2013-04-15T06:32:13+00:00] INFO: Processing service[ceph_mon] action nothing (ceph::mon line 23)
[2013-04-15T06:32:13+00:00] INFO: Processing execute[ceph-mon mkfs] action run (ceph::mon line 40)
creating /var/lib/ceph/tmp/ceph-ceph-0.mon.keyring
added entity mon. auth auth(auid = 18446744073709551615 key=AQC8umZRaDlKKBAAqD8li3u2JObepmzFzDPM3g== with 0 caps)
ceph-mon: mon.noname-a 192.168.4.118:6789/0 is local, renaming to mon.ceph-0
ceph-mon: set fsid to f80aba97-26c5-4aa3-971e-09c5a3afa32f
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph-0 for mon.ceph-0
[2013-04-15T06:32:14+00:00] INFO: execute[ceph-mon mkfs] ran successfully
[2013-04-15T06:32:14+00:00] INFO: execute[ceph-mon mkfs] sending start action to service[ceph_mon] (immediate)
[2013-04-15T06:32:14+00:00] INFO: Processing service[ceph_mon] action start (ceph::mon line 23)
[2013-04-15T06:32:15+00:00] INFO: service[ceph_mon] started
[2013-04-15T06:32:15+00:00] INFO: Processing ruby_block[tell ceph-mon about its peers] action create (ceph::mon line 64)
mon already active; ignoring bootstrap hint

[2013-04-15T06:32:16+00:00] INFO: ruby_block[tell ceph-mon about its peers] called
[2013-04-15T06:32:16+00:00] INFO: Processing ruby_block[get osd-bootstrap keyring] action create (ceph::mon line 79)
2013-04-15 06:32:16.872040 7fca8e297780 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication
2013-04-15 06:32:16.872042 7fca8e297780 -1 unable to authenticate as client.admin
2013-04-15 06:32:16.872400 7fca8e297780 -1 ceph_tool_common_init failed.
[2013-04-15T06:32:18+00:00] INFO: ruby_block[get osd-bootstrap keyring] called
[2013-04-15T06:32:18+00:00] INFO: Processing package[gdisk] action upgrade (ceph::osd line 37)
[2013-04-15T06:32:27+00:00] INFO: package[gdisk] upgraded from uninstalled to 
[2013-04-15T06:32:27+00:00] INFO: Processing service[ceph_osd] action nothing (ceph::osd line 48)
[2013-04-15T06:32:27+00:00] INFO: Processing directory[/var/lib/ceph/bootstrap-osd] action create (ceph::osd line 67)
[2013-04-15T06:32:27+00:00] INFO: Processing file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] action create (ceph::osd line 76)
[2013-04-15T06:32:27+00:00] INFO: entered create
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] owner changed to 0
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] group changed to 0
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] mode changed to 440
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] created file /var/lib/ceph/bootstrap-osd/ceph.keyring.raw
[2013-04-15T06:32:27+00:00] INFO: Processing execute[format as keyring] action run (ceph::osd line 83)
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
added entity client.bootstrap-osd auth auth(auid = 18446744073709551615 key=AQAOl2tR0M4bMRAAatSlUh2KP9hGBBAP6u5AUA== with 0 caps)
[2013-04-15T06:32:27+00:00] INFO: execute[format as keyring] ran successfully
[2013-04-15T06:32:28+00:00] INFO: Chef Run complete in 14.479108446 seconds
[2013-04-15T06:32:28+00:00] INFO: Running report handlers
[2013-04-15T06:32:28+00:00] INFO: Report handlers complete

Witness:

ceph-0:~ # rcceph status
=== mon.ceph-0 === 
mon.ceph-0: running {"version":"0.60-468-g98de67d"}

On the note of building an easy-to-deploy Ceph appliance, assuming you’re not using Chef and just want something to play with, I reckon the way to go is to use config pretty similar to what would be deployed by this Chef cookbook, i.e. an absolute minimal /etc/ceph/ceph.conf specifying nothing other than the initial mons, then use the various Ceph CLI tools to create mons and osds on each node, and just rely on the init script in Ceph >= 0.58 to do the right thing with what it finds (having to explicitly specify each mon, osd and mds in the Ceph config by name always bugged me). Bonus points for using csync2 to propagate /etc/ceph/ceph.conf across the cluster.
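
For what it’s worth, “absolute minimal” here means something along these lines (reusing the fsid and mon address from the run above, so adjust to taste):

# /etc/ceph/ceph.conf (minimal sketch)
[global]
    fsid = f80aba97-26c5-4aa3-971e-09c5a3afa32f
    mon initial members = ceph-0
    mon host = 192.168.4.118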