The Fridge Magnets

Last Thursday night was the TasLUG OpenStack 4th Birthday meetup. We had some nice nibbly food, some drinks, and four OpenStacky talks:

  • An update from the OpenStack Foundation (presented by me, with slides provided by the Foundation).
  • A talk about the NeCTAR cloud and using the command line tools to work with images, by Scott Bragg.
  • A talk on spinning up instances with Nova and Heat, by Stewart Wilde.
  • A talk by me on Ceph, and how it can be used as the storage backend for an OpenStack cloud.

We also had some posters, stickers and fridge magnets made up. The fridge magnets were remarkably popular. If you weren’t at TasLUG that night and you want a fridge magnet, first download this image (the full-res one linked to, not the inline one):

Then, go to Vistaprint and place an order for Magnetic Business Cards, using this image. You can get 25 done for about $10, plus shipping.

Finally, I would like to publicly thank the OpenStack Foundation for supporting this event.

Happiness is a Hong Kong SIM

In 1996 Regurgitator released a song called “Kong Foo Sing”. It starts with the line “Happiness is a Kong Foo Sing”, in reference to a particular brand of fortune cookie. But one night last week at the OpenStack Summit, I couldn’t help but think it would be better stated as “Happiness is a Hong Kong SIM”, because I’ve apparently become thoroughly addicted to my data connection.

I was there with five other SUSE engineers who work on SUSE Cloud (our OpenStack offering): Ralf Haferkamp, Michal Jura, Dirk Müller, Vincent Untz and Bernhard Wiedemann. We also had a SUSE crew manning a booth with one of those skill tester machines filled with plush Geekos. I didn’t manage to get one; apparently my manual dexterity is less well developed than my hacking skills, because I did make ATC (Active Technical Contributor) thanks to a handful of openSUSE-related commits to TripleO (apologies for the shameless self-aggrandizement, but this is my blog after all).

Given this was my first design summit, I thought it most sensible to first attend “Design Summit 101” to get a handle on the format. The summit as a whole is split into general sessions and design summit sessions: the former are for everyone, while the latter are intended for developers to map out what needs to happen for the next release. There are also vendor booths in the main hall.

Roughly speaking, a design session gets a bunch of people together with a moderator/leader and an etherpad up on a projector, which anyone can edit, and the topic at hand is hashed out over the next forty-odd minutes. It’s actually a really good format. In the sessions I attended, anyone who wanted to speak or had something to offer was heard. Everyone was courteous, and very welcoming of input and of newcomers. Actually, as I remarked on the last day towards the end of Joshua McKenty’s “Culture, Code, Community and Conway” talk, everyone is terrifyingly happy. This is not normal, but it’s a good thing.

As I’ve been doing high availability and storage for the past several years, and have also spent time on SUSE porting and scalability work on Crowbar, I split my time largely between HA, storage and deployment sessions.

On the deployment front, I went to:

On High Availability:

On Storage:

  • Encrypted Block Storage: Technical Walkthrough. This looks pretty neat. Crypto is done on the compute host via dm-crypt, so everything is encrypted in the volume store and even over the wire going to and from the compute host. It still needs work (naturally); notably, it currently uses a single static key, with Barbican integration to come later.
  • Swift Drive Workloads and Kinetic Open Storage. Sadly I had to skip out of this one early, but Seagate now has an interesting product: drives (and some enclosures) that present themselves as key/value stores over Ethernet, rather than as block devices. The idea is to strip a whole lot of layers out of the storage stack to try to get better performance.
  • Real World Usage of GlusterFS + OpenStack. Interesting history of the project, what the pieces are, and how they now provide an “all-in-one” storage solution for OpenStack.
  • Ceph: The De Facto Storage Backend for OpenStack. It was inevitable that this would go back-to-back with a GlusterFS presentation. All the storage components (Glance, Cinder, object store) are unified on Ceph. Interestingly, the libvirt_image_type=rbd option lets you boot all VMs directly from Ceph (at least if you’re using KVM); there’s a rough configuration sketch after this list. Is it the perfect stack? “Almost” (Glance images are still copied around more than they should be, although there’s a patch for this floating around somewhere, and some snapshot integration work is still necessary).
  • Sheepdog: Yet Another All-In-One Storage for OpenStack. So everyone is doing all-in-one storage for OpenStack now 😉 I haven’t spent any time with Sheepdog in the past, so this was interesting. It apparently tries to make minimal assumptions about the underlying kernel and filesystem, yet supports thousands of nodes, and is purportedly fast and small (<50MB memory footprint, and only about 35K lines of C code).
  • Ceph OpenStack Integration Unconference (gathering ideas to improve Ceph integration in OpenStack).
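
As a rough illustration of the Ceph integration mentioned above, the configuration boils down to pointing each service at the relevant Ceph pool. This is only a sketch based on the Havana-era Ceph/OpenStack documentation; option names and sections have changed between releases (the nova option in particular has been spelled slightly differently over time), and the pool names, users and UUID here are placeholders:

    # glance-api.conf: store images in Ceph
    default_store = rbd
    rbd_store_user = glance
    rbd_store_pool = images

    # cinder.conf: volumes in Ceph via the RBD driver
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret UUID>

    # nova.conf: boot VMs directly from RBD (the option mentioned above)
    libvirt_images_type = rbd
    libvirt_images_rbd_pool = vms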

Around all this of course were many interesting discussions, meals and drinks with all sorts of people; my immediate colleagues, my some-time partners in crime, various long-time conference buddies and an assortment of delightful (and occasionally crazy) new acquaintances. If you’ve made it this far and haven’t been to an OpenStack summit yet, try to get to Atlanta in six months or Paris in a year. I don’t know yet whether or not I’ll be there, but I can pretty much guarantee you’ll still have a good time.

A Cosmic Dance in a Little Box

It’s Hack Week again. This time around I decided to look at running TripleO on openSUSE. If you’re not familiar with TripleO, it’s short for OpenStack on OpenStack, i.e. it’s a project to deploy OpenStack clouds on bare metal, using the components of OpenStack itself to do the work. I take some delight in bootstrapping of this nature – I think there’s a nice symmetry to it. Or, possibly, I’m just perverse.

Anyway, onwards. I had a chat to Robert Collins about TripleO while at PyCon AU 2013. He introduced me to diskimage-builder and suggested that making it capable of building openSUSE images would be a good first step. It turned out that making diskimage-builder actually run on openSUSE was probably a better first step, but I managed to get most of that out of the way in a random fit of hackery a couple of months ago. Further testing this week uncovered a few more minor kinks, two of which I’ve fixed here and here. It’s always the cross-distro work that seems to bring out the edge cases.
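
For the curious, building an image with diskimage-builder then boils down to a single command. This is just a sketch, assuming the openSUSE element is available in your diskimage-builder checkout; the element name and output filename are illustrative:

    # build a qcow2 image from the openSUSE element plus the generic "vm" element;
    # the result ends up in opensuse-image.qcow2
    disk-image-create -a amd64 -o opensuse-image opensuse vm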

Then I figured there’s not much point making diskimage-builder create openSUSE images without knowing I can set up some sort of environment to validate them. So I’ve spent large parts of the last couple of days working my way through the TripleO Dev/Test instructions, deploying the default Ubuntu images with my openSUSE 12.3 desktop as the VM host. For those following along at home, the install-dependencies script doesn’t work on openSUSE (some manual intervention is required, which I’ll try to either fix, document, or both, later). Anyway, at some point last night, I had what appeared to be a working seed VM, and a broken undercloud VM which was choking during cloud-init:

Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
Request timed out

Figuring that out, well…  There I was with a seed VM deployed from an image built with some scripts from several git repositories, automatically configured to run even more pieces of OpenStack than I’ve spoken about before, which in turn had attempted to deploy a second VM, which wanted to connect back to the first over a virtual bridge and via the magic of some iptables rules and I was running tcpdump and tailing logs and all the moving parts were just suddenly this GIANT COSMIC DANCE in a tiny little box on my desk on a hill on an island at the bottom of the world.

It was at this point I realised I had probably been sitting at my computer for too long.

It turns out the problem above was due to my_ip being set to an empty string in /etc/nova/nova.conf on the seed VM; somehow I didn’t have the fix in my local source repo. An additional problem is that libvirt on openSUSE, like on Fedora, doesn’t set uri_default="qemu:///system". This causes nova baremetal calls from the seed VM to the host to fail, as mentioned in bug #1226310. That bug is apparently fixed, but the fix doesn’t seem to work for me (another thing to investigate), so I went with the workaround of putting uri_default="qemu:///system" in ~/.config/libvirt/libvirt.conf.
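
For anyone hitting the same pair of problems, the two changes amount to something like this (the IP address is only an example; use whatever address the seed VM actually has on the bridge):

    # /etc/nova/nova.conf on the seed VM: my_ip must not be an empty string
    my_ip = 192.0.2.1

    # ~/.config/libvirt/libvirt.conf on the openSUSE host (the workaround above)
    uri_default = "qemu:///system"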

So now (after a rather spectacular amount of disk and CPU thrashing) there are three OpenStack clouds running on my desktop PC. No smoke has come out.

  • The seed VM has successfully spun up the “baremetal_0” undercloud VM and deployed OpenStack to it.
  • The undercloud VM has successfully spun up the “baremetal_1” and “baremetal_2” VMs and deployed them as the overcloud control and compute nodes.
  • I have apparently booted a demo VM in the overcloud, i.e. I’ve got a VM running inside a VM, although I haven’t quite managed to ssh into that innermost VM yet (I suspect I’m missing a route or a firewall rule somewhere; see the quick checks below).
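
If you end up stuck at the same point, the usual generic checks are worth running before digging any deeper (the VM name here is a placeholder):

    # is there a route to the network the demo VM lives on?
    ip route

    # is anything in iptables dropping the traffic, or failing to NAT it?
    sudo iptables -L -n -v
    sudo iptables -t nat -L -n -v

    # did the demo VM boot far enough to bring up networking and sshd?
    nova console-log demo-vm | tail -n 50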

I think I had it right last night. There is a giant cosmic dance being performed in a tiny little box on my desk on a hill on an island at the bottom of the world.

Or, I’ve been sitting at my computer for too long again.

The Pieces of OpenStack

I gave an introductory “WTF is OpenStack” talk at the OpenStack miniconf at PyCon-AU this morning. For the uninitiated, this eventually boiled down to defining OpenStack as:

A framework that allows you to efficiently and dynamically manage virtualization and storage, so your users can request computing power and disk space as they need it.

It was suggested, given recent developments, that I might change this to:

A framework that allows you to efficiently and dynamically manage ALL THE THINGS!

Leaving that aside for the moment though, several people pointed out that my state transition diagram very clearly shows how the various pieces of OpenStack interact when someone wants to deploy a VM. Here it is for posterity (there’s also a rough CLI-level sketch of the flow at the end of this post):

There’s also an etherpad covering the day’s talks at https://etherpad.openstack.org/pyconau-os-hack. Note that the images in the above diagram (aside from the hand-drawn Tim) are courtesy of Martin Loschwitz.
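
For the CLI-inclined, the outward-facing side of that flow looks roughly like this. It’s only a sketch using the Havana-era command line clients; the flavor, image ID and VM name are placeholders:

    keystone token-get          # Keystone: authenticate and get a token
    glance image-list           # Glance: find an image to boot from
    nova boot --flavor m1.small --image <image-id> demo-vm    # Nova: schedule and launch
    nova list                   # watch the instance go from BUILD to ACTIVE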