The wrong way to debug CrashLoopBackOff

Last week I had occasion to test deploying ceph-csi on a k3s cluster, so that Kubernetes workloads could access block storage provided by an external Ceph cluster. I went with the upstream Ceph documentation because, assuming everything worked, it’d then be really easy for me to say to others “just go do this”.

Everything did not work.

Continue reading

Teaching an odd dog new tricks

We – that is to say the storage team at SUSE – have a tool we’ve been using for the past few years to help with development and testing of Ceph on SUSE Linux. It’s called sesdev because it was created largely for SES (SUSE Enterprise Storage) development. It’s essentially a wrapper around vagrant and libvirt that will spin up clusters of VMs running openSUSE or SLES, then deploy Ceph on them. You would never use such clusters in production, but it’s really nice to be able to easily spin up a cluster for testing purposes that behaves something like a real cluster would, then throw it away when you’re done.

I’ve recently been trying to spend more time playing with Kubernetes, which means I wanted to be able to spin up clusters of VMs running openSUSE or SLES, then deploy Kubernetes on them, then throw the clusters away when I was done, or when I broke something horribly and wanted to start over. Yes, I know there’s a bunch of other tools for doing toy Kubernetes deployments (minikube comes to mind), but given I already had sesdev and was pretty familiar with it, I thought it’d be worthwhile seeing if I could teach it to deploy k3s, a particularly lightweight version of Kubernetes. Turns out that wasn’t too difficult, so now I can do this:

> sesdev create k3s
=== Creating deployment "k3s" with the following configuration === 
Deployment-wide parameters (applicable to all VMs in deployment):
deployment ID:    k3s
number of VMs:    5
version:          k3s
OS:               tumbleweed
public network:   10.20.190.0/24 
Proceed with deployment (y=yes, n=no, d=show details) ? [y]: y
=== Running shell command ===
vagrant up --no-destroy-on-error --provision
Bringing machine 'master' up with 'libvirt' provider...
Bringing machine 'node1' up with 'libvirt' provider...
Bringing machine 'node2' up with 'libvirt' provider...
Bringing machine 'node3' up with 'libvirt' provider...
Bringing machine 'node4' up with 'libvirt' provider...

[...
  wait a few minutes
  (there's lots more log information output here in real life)
...]

=== Deployment Finished ===
 You can login into the cluster with:
 $ sesdev ssh k3s

…and then I can do this:

> sesdev ssh k3s
Last login: Fri Mar 24 11:50:15 CET 2023 from 10.20.190.204 on ssh
Have a lot of fun…

master:~ # kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   5m16s   v1.25.7+k3s1
node2    Ready    <none>                 2m17s   v1.25.7+k3s1
node1    Ready    <none>                 2m15s   v1.25.7+k3s1
node3    Ready    <none>                 2m16s   v1.25.7+k3s1
node4    Ready    <none>                 2m16s   v1.25.7+k3s1

master:~ # kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-79f67d76f8-rpj4d   1/1     Running     0          5m9s
kube-system   metrics-server-5f9f776df5-rsqhb           1/1     Running     0          5m9s
kube-system   coredns-597584b69b-xh4p7                  1/1     Running     0          5m9s
kube-system   helm-install-traefik-crd-zz2ld            0/1     Completed   0          5m10s
kube-system   helm-install-traefik-ckdsr                0/1     Completed   1          5m10s
kube-system   svclb-traefik-952808e4-5txd7              2/2     Running     0          3m55s
kube-system   traefik-66c46d954f-pgnv8                  1/1     Running     0          3m55s
kube-system   svclb-traefik-952808e4-dkkp6              2/2     Running     0          2m25s
kube-system   svclb-traefik-952808e4-7wk6l              2/2     Running     0          2m13s
kube-system   svclb-traefik-952808e4-chmbx              2/2     Running     0          2m14s
kube-system   svclb-traefik-952808e4-k7hrw              2/2     Running     0          2m14s

…and then I can make a mess with kubectl apply, helm, etc.

One thing that sesdev knows how to do is deploy VMs with extra virtual disks. This functionality is there for Ceph deployments, but there’s no reason we can’t turn it on when deploying k3s:

> sesdev create k3s --num-disks=2
> sesdev ssh k3s
master:~ # for node in \
    $(kubectl get nodes -o 'jsonpath={.items[*].metadata.name}') ;
    do echo $node ; ssh $node cat /proc/partitions ; done
master
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
node3
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node2
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node4
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node1
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc

As you can see, this gives all the worker nodes an extra two 8GB virtual disks. I suspect this may make sesdev an interesting tool for testing other Kubernetes-based storage systems such as Longhorn, but I haven’t tried that yet.
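
Purely as a sketch of what that first Longhorn experiment might look like – the chart location is as per the Longhorn documentation, I haven’t actually run this on a sesdev-built cluster, and the extra disks would still need to be formatted, mounted and added to Longhorn’s disk configuration on each node – it would presumably start with something like:

master:~ # helm repo add longhorn https://charts.longhorn.io
master:~ # helm repo update
master:~ # helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
master:~ # kubectl -n longhorn-system get pods --watch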

An S3 Storage Experiment

My team at SUSE is working on a new S3-compatible storage solution for Kubernetes, based on Ceph’s RADOS Gateway (RGW), except without any of the RADOS bits. The idea is that you can deploy our s3gw container on top of Longhorn (which provides the underlying replicated storage), and all of this runs in your Kubernetes cluster, along with your applications, which thus have convenient access to a local S3-compatible object store.

We’ve done this by adding a new storage backend to RGW. The approach we’ve taken is to use SQLite for metadata, with object data stored as files in a regular filesystem. This works quite neatly in a Kubernetes cluster with Longhorn, because Longhorn can provide a persistent volume (think: an ext4 filesystem), on which s3gw can store its SQLite database and object data files. If you’d like to kick the tyres, check out Giuseppe’s deployment tutorial for the 0.2.0 release, but bear in mind that as I’m writing this we’re all the way up to 0.4.0 so some details may have changed.
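
Just to make the persistent volume part concrete, here’s a minimal, purely illustrative PersistentVolumeClaim of the sort s3gw ends up sitting on top of, assuming Longhorn’s default StorageClass name (the claim name and size here are invented – the real manifests are in the deployment tutorial linked above):

> kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3gw-data        # illustrative name only
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
EOF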

While s3gw on Longhorn on Kubernetes remains our primary focus for this project, the fact that this thing only needs a filesystem for backing storage means it can be run on top of just about anything. Given “just about anything” includes an old school two node Pacemaker cluster with DRBD for replicated storage, why not give that a try? I kinda like the idea of a good solid highly available S3-compatible storage solution that you could shove into the bottom of a rack somewhere without too much difficulty.

Continue reading

Distributed Storage is Easier Now: Usability from Ceph Luminous to Nautilus

On January 21, 2019 I presented Distributed Storage is Easier Now: Usability from Ceph Luminous to Nautilus at the linux.conf.au 2019 Systems Administration Miniconf. Thanks to the incredible Next Day Video crew, the video was online the next day, and you can watch it here:

If you’d rather read than watch, the meat of the talk follows, but before we get to that I have two important announcements:

  1. Cephalocon 2019 is coming up on May 19-20, in Barcelona, Spain. The CFP is open until Friday February 1, so time is rapidly running out for submissions. Get onto it.
  2. If you’re able to make it to FOSDEM on February 2-3, there’s a whole Software Defined Storage Developer Room thing going on, with loads of excellent content including What’s new in Ceph Nautilus – project status update and preview of the coming release and Managing and Monitoring Ceph with the Ceph Manager Dashboard, which will cover rather more than I was able to here.

Continue reading

Hello Salty Goodness

Anyone who’s ever deployed Ceph presumably knows about ceph-deploy. It’s right there in the Deployment chapter of the upstream docs, and it’s pretty easy to use to get a toy test cluster up and running. For any decent sized cluster though, ceph-deploy rapidly becomes cumbersome… As just one example, do you really want to have to `ceph-deploy osd prepare` every disk? For larger production clusters it’s almost certainly better to use a fully-fledged configuration management tool, such as Salt, which is what this post is about.
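
To illustrate why that gets old quickly – the host and device names below are invented, but the HOST:DISK argument form is how ceph-deploy of that era worked – a modest cluster soon turns into something like:

# ceph-deploy osd prepare ceph-0:sdb ceph-0:sdc ceph-0:sdd
# ceph-deploy osd prepare ceph-1:sdb ceph-1:sdc ceph-1:sdd
# ceph-deploy osd prepare ceph-2:sdb ceph-2:sdc ceph-2:sdd
[...]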

Continue reading

A Gentle Introduction to Ceph

I told a little story about Ceph today in the sysadmin miniconf at linux.conf.au 2016. Simon and Ewen have already put a copy of my slides up on the miniconf web site, but for bonus posterity you can get them from here also. For some more detail, check out Lars’ SUSE Enterprise Storage – Design and Performance for Ceph slides from SUSECon 2015 and Software-defined Storage: Sizing and Performance.

Update (2016-02-07): The video of the talk can be found here.

Salt and Pepper Squid with Fresh Greens

A few days ago I told Andrew Wafaa I’d write up some notes for him and publish them here. I became hungry contemplating this work, so decided cooking was the first order of business:

Salt and Pepper Squid with Fresh Greens

It turned out reasonably well for a first attempt. Could’ve been crispier, and it was quite salty, but the pepper and chilli definitely worked (I’m pretty sure the chilli was dried bhut jolokia I harvested last summer). But this isn’t a post about food, it’s about some software I’ve packaged for managing Ceph clusters on openSUSE and SUSE Linux Enterprise Server.

Continue reading

One More chef-client Run

Carrying on from my last post, the failed chef-client run came down to the init script in ceph 0.56 not yet knowing how to iterate /var/lib/ceph/{mon,osd,mds} and automatically start the appropriate daemons. This functionality seems to have been introduced in 0.58 or so by commit c8f528a. So I gave it another shot with a build of ceph 0.60.
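
Conceptually – and this is just a sketch of the behaviour, not the actual init script code – the newer script does something along these lines for each daemon type, starting whatever it finds:

# for type in mon osd mds ; do
      for dir in /var/lib/ceph/${type}/ceph-* ; do
          [ -d "$dir" ] || continue
          id="${dir##*/}"                # e.g. "ceph-ceph-0" or "ceph-3"
          ceph-${type} -i "${id#ceph-}"  # starts mon.ceph-0, osd.3, etc.
      done
  done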

On each of my ceph nodes, a bit of upgrading and cleanup. Note the choice of ceph 0.60 was mostly arbitrary; I just wanted the latest thing I could find an RPM for in a hurry. Also, some of the rm invocations won’t be necessary, depending on what state things are actually in:

# zypper ar -f http://download.opensuse.org/repositories/home:/dalgaaf:/ceph:/extra/openSUSE_12.3/home:dalgaaf:ceph:extra.repo
# zypper ar -f http://gitbuilder.ceph.com/ceph-rpm-opensuse12-x86_64-basic/ref/next/x86_64/ ceph.com-next_openSUSE_12_x86_64
# zypper in ceph-0.60
# kill $(pidof ceph-mon)
# rm /etc/ceph/*
# rm /var/run/ceph/*
# rm -r /var/lib/ceph/*/*

That last gets rid of any half-created mon directories.

I also edited the Ceph environment to only have one mon (one of my colleagues rightly pointed out that you need an odd number of mons, and I had declared two previously, for no good reason). That’s knife environment edit Ceph on my desktop, then setting "mon_initial_members": "ceph-0" instead of "ceph-0,ceph-1".

I also had to edit each node to add an osd_devices array, and to remove the mon role from ceph-1. That’s knife node edit ceph-0.example.com, then inserting:

  "normal": {
    ...
    "ceph": {
      "osd_devices": [  ]
    }
  ...

Without the osd_devices array defined, the osd recipe fails (“undefined method `each_with_index’ for nil:NilClass”). I was kind of hoping an empty osd_devices array would allow ceph to use the root partition. No such luck; the cookbook really does expect you to be doing a sensible deployment with actual separate devices for your OSDs. Oh, well. I’ll try that another time. For now at least I’ve demonstrated that ceph-0.60 does give you what appears to be a clean mon setup when using the upstream cookbooks on openSUSE 12.3:

knife ssh name:ceph-0.example.com -x root chef-client
[2013-04-15T06:32:13+00:00] INFO: *** Chef 10.24.0 ***
[2013-04-15T06:32:13+00:00] INFO: Run List is [role[ceph-mon], role[ceph-osd], role[ceph-mds]]
[2013-04-15T06:32:13+00:00] INFO: Run List expands to [ceph::mon, ceph::osd, ceph::mds]
[2013-04-15T06:32:13+00:00] INFO: HTTP Request Returned 404 Not Found: No routes match the request: /reports/nodes/ceph-0.example.com/runs
[2013-04-15T06:32:13+00:00] INFO: Starting Chef Run for ceph-0.example.com
[2013-04-15T06:32:13+00:00] INFO: Running start handlers
[2013-04-15T06:32:13+00:00] INFO: Start handlers complete.
[2013-04-15T06:32:13+00:00] INFO: Loading cookbooks [apache2, apt, ceph]
[2013-04-15T06:32:13+00:00] INFO: Processing template[/etc/ceph/ceph.conf] action create (ceph::conf line 6)
[2013-04-15T06:32:13+00:00] INFO: template[/etc/ceph/ceph.conf] updated content
[2013-04-15T06:32:13+00:00] INFO: template[/etc/ceph/ceph.conf] mode changed to 644
[2013-04-15T06:32:13+00:00] INFO: Processing service[ceph_mon] action nothing (ceph::mon line 23)
[2013-04-15T06:32:13+00:00] INFO: Processing execute[ceph-mon mkfs] action run (ceph::mon line 40)
creating /var/lib/ceph/tmp/ceph-ceph-0.mon.keyring
added entity mon. auth auth(auid = 18446744073709551615 key=AQC8umZRaDlKKBAAqD8li3u2JObepmzFzDPM3g== with 0 caps)
ceph-mon: mon.noname-a 192.168.4.118:6789/0 is local, renaming to mon.ceph-0
ceph-mon: set fsid to f80aba97-26c5-4aa3-971e-09c5a3afa32f
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph-0 for mon.ceph-0
[2013-04-15T06:32:14+00:00] INFO: execute[ceph-mon mkfs] ran successfully
[2013-04-15T06:32:14+00:00] INFO: execute[ceph-mon mkfs] sending start action to service[ceph_mon] (immediate)
[2013-04-15T06:32:14+00:00] INFO: Processing service[ceph_mon] action start (ceph::mon line 23)
[2013-04-15T06:32:15+00:00] INFO: service[ceph_mon] started
[2013-04-15T06:32:15+00:00] INFO: Processing ruby_block[tell ceph-mon about its peers] action create (ceph::mon line 64)
mon already active; ignoring bootstrap hint

[2013-04-15T06:32:16+00:00] INFO: ruby_block[tell ceph-mon about its peers] called
[2013-04-15T06:32:16+00:00] INFO: Processing ruby_block[get osd-bootstrap keyring] action create (ceph::mon line 79)
2013-04-15 06:32:16.872040 7fca8e297780 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication
2013-04-15 06:32:16.872042 7fca8e297780 -1 unable to authenticate as client.admin
2013-04-15 06:32:16.872400 7fca8e297780 -1 ceph_tool_common_init failed.
[2013-04-15T06:32:18+00:00] INFO: ruby_block[get osd-bootstrap keyring] called
[2013-04-15T06:32:18+00:00] INFO: Processing package[gdisk] action upgrade (ceph::osd line 37)
[2013-04-15T06:32:27+00:00] INFO: package[gdisk] upgraded from uninstalled to 
[2013-04-15T06:32:27+00:00] INFO: Processing service[ceph_osd] action nothing (ceph::osd line 48)
[2013-04-15T06:32:27+00:00] INFO: Processing directory[/var/lib/ceph/bootstrap-osd] action create (ceph::osd line 67)
[2013-04-15T06:32:27+00:00] INFO: Processing file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] action create (ceph::osd line 76)
[2013-04-15T06:32:27+00:00] INFO: entered create
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] owner changed to 0m
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] group changed to 0
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] mode changed to 440
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] created file /var/lib/ceph/bootstrap-osd/ceph.keyring.raw
[2013-04-15T06:32:27+00:00] INFO: Processing execute[format as keyring] action run (ceph::osd line 83)
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
added entity client.bootstrap-osd auth auth(auid = 18446744073709551615 key=AQAOl2tR0M4bMRAAatSlUh2KP9hGBBAP6u5AUA== with 0 caps)
[2013-04-15T06:32:27+00:00] INFO: execute[format as keyring] ran successfully
[2013-04-15T06:32:28+00:00] INFO: Chef Run complete in 14.479108446 seconds
[2013-04-15T06:32:28+00:00] INFO: Running report handlers
[2013-04-15T06:32:28+00:00] INFO: Report handlers complete

Witness:

ceph-0:~ # rcceph status
=== mon.ceph-0 === 
mon.ceph-0: running {"version":"0.60-468-g98de67d"}

On the note of building an easy-to-deploy Ceph appliance, assuming you’re not using Chef and just want something to play with, I reckon the way to go is to use config pretty similar to what would be deployed by this Chef cookbook, i.e. an absolutely minimal /etc/ceph/ceph.conf specifying nothing other than the initial mons, then use the various Ceph CLI tools to create mons and osds on each node, and just rely on the init script in Ceph >= 0.58 to do the right thing with what it finds (having to explicitly specify each mon, osd and mds in the Ceph config by name always bugged me). Bonus points for using csync2 to propagate /etc/ceph/ceph.conf across the cluster.
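
For illustration, reusing the fsid and mon address from earlier in this post, that minimal config is little more than the following (the csync2 -xv at the end assumes /etc/ceph/ceph.conf is already covered by your csync2 configuration):

# cat > /etc/ceph/ceph.conf <<EOF
[global]
    fsid = f80aba97-26c5-4aa3-971e-09c5a3afa32f
    mon initial members = ceph-0
    mon host = 192.168.4.118:6789
EOF
# csync2 -xv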

The Ceph Chef Experiment

Sometimes it’s most interesting to just dive in and see what breaks. There’s a Chef cookbook for Ceph on github which seems rather more recently developed than the one in SUSE-Cloud/barclamp-ceph, and seeing as its use is documented in the Ceph manual, I reckon that’s the one I want to be using. Of course, the README says “Tested as working: Ubuntu Precise (12.04)”, and I’m using openSUSE 12.3…

First things first, need a Chef server, so I installed openSUSE 12.3 on a VM, then installed Chef 10 on that, roughly following the manual installation instructions. Note for those following along at home – sometimes the blocks I’ve copied here are just commands, sometimes they include command output as well. You’ll figure it out 🙂

# zypper ar -f http://download.opensuse.org/repositories/systemsmanagement:/chef:/10/openSUSE_12.3/systemsmanagement:chef:10.repo
# zypper in rubygem-chef-server
# chkconfig couchdb on
# rccouchdb start
# chkconfig rabbitmq-server on
# rcrabbitmq-server start
# rabbitmqctl add_vhost /chef
# rabbitmqctl add_user chef testing
# rabbitmqctl set_permissions -p /chef chef ".*" ".*" ".*"
# for service in solr expander server server-webui; do
      chkconfig chef-$service on
      rcchef-$service start
  done

I didn’t bother editing /etc/chef/server.rb, the config as shipped works fine (not that the AMQP password is very secure, mind). The only catch is the web UI didn’t start. IIRC this is due to /etc/chef/webui.pem not existing yet (chef-server creates it, but this doesn’t finish until later).
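
I didn’t go back and verify it, but presumably starting the web UI by hand once webui.pem exists is all that’s needed:

# rcchef-server-webui start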

Then configured knife:

# knife configure -i
WARNING: No knife configuration file found
Where should I put the config file? [/root/.chef/knife.rb]
Please enter the chef server URL: [http://os-chef.example.com:4000]
Please enter a clientname for the new client: [root]
Please enter the existing admin clientname: [chef-webui]
Please enter the location of the existing admin client's private key: [/etc/chef/webui.pem]
Please enter the validation clientname: [chef-validator]
Please enter the location of the validation key: [/etc/chef/validation.pem]
Please enter the path to a chef repository (or leave blank):
Creating initial API user...
Created client[root]
Configuration file written to /root/.chef/knife.rb

And make a client for me:

# knife client create tserong -d -a -f /tmp/tserong.pem
Created client[tserong]

Then set up my desktop as a Chef workstation (roughly following these docs, and again pulling Chef from systemsmanagement:chef:10 on OBS):

# sudo zypper in rubygem-chef
# cd ~
# git clone git://github.com/opscode/chef-repo.git
# cd chef-repo
# mkdir -p ~/.chef
# scp root@os-chef:/etc/chef/validation.pem ~/.chef/
# scp root@os-chef:/tmp/tserong.pem ~/.chef/
# knife configure
WARNING: No knife configuration file found
Where should I put the config file? [/home/tserong/.chef/knife.rb]
Please enter the chef server URL: [http://desktop.example.com:4000] http://os-chef.example.com:4000
Please enter an existing username or clientname for the API: [tserong]
Please enter the validation clientname: [chef-validator]
Please enter the location of the validation key: [/etc/chef/validation.pem] /home/tserong/.chef/validation.pem
Please enter the path to a chef repository (or leave blank): /home/tserong/chef-repo
[...]
Configuration file written to /home/tserong/.chef/knife.rb

Make sure it works:

# knife client list
chef-validator
chef-webui
root
tserong

Grab the cookbooks and upload them to the Chef server. The Ceph cookbook claims to depend on apache and apt, although presumably the former is only necessary for RADOSGW, and the latter for Debian-based systems. Anyway:

# cd ~/chef-repo
# git submodule add git@github.com:opscode-cookbooks/apache2.git cookbooks/apache2
# git submodule add git@github.com:opscode-cookbooks/apt.git cookbooks/apt
# git submodule add git@github.com:ceph/ceph-cookbooks.git cookbooks/ceph
# knife cookbook upload apache2
# knife cookbook upload apt
# knife cookbook upload ceph

Boot up a couple more VMs to be Ceph nodes, using the appliance image from last time. These need chef-client installed, and need to be registered with the chef server. knife bootstrap will install chef-client and dependencies for you, but after looking at the source, if /usr/bin/chef doesn’t exist, it actually uses wget or curl to pull http://opscode.com/chef/install.sh and runs that. How this is considered a good idea is completely baffling to me, so again I installed our chef build from OBS on each of my Ceph nodes (note to self: should add this to appliance image on Studio):

# zypper ar -f http://download.opensuse.org/repositories/systemsmanagement:/chef:/10/openSUSE_12.3/systemsmanagement:chef:10.repo
# zypper in rubygem-chef

And ran the now-arguably-safe knife bootstrap from my desktop:

# knife bootstrap ceph-0.example.com
Bootstrapping Chef on ceph-0.example.com
[...]
# knife bootstrap ceph-1.example.com
Bootstrapping Chef on ceph-1.example.com
[...]

Then, roughly following the Ceph Deploying with Chef document.

Generate a UUID and monitor secret (had to do the latter on one of my Ceph VMs, as ceph-authtool is conveniently already installed):

# uuidgen -r
f80aba97-26c5-4aa3-971e-09c5a3afa32f
# ceph-authtool /dev/stdout --name=mon. --gen-key
[mon.]
key = AQC8umZRaDlKKBAAqD8li3u2JObepmzFzDPM3g==

Then on my desktop:

knife environment create Ceph

This I filled in with:

{
  "name": "Ceph",
  "description": "",
  "cookbook_versions": {
  },
  "json_class": "Chef::Environment",
  "chef_type": "environment",
  "default_attributes": {
    "ceph": {
      "monitor-secret": "AQC8umZRaDlKKBAAqD8li3u2JObepmzFzDPM3g==",
      "config": {
        "fsid": "f80aba97-26c5-4aa3-971e-09c5a3afa32f",
        "mon_initial_members": "ceph-0,ceph-1",
        "global": {
        },
        "osd": {
          "osd journal size": "1000",
          "filestore xattr use omap": "true"
        }
      }
    }
  },
  "override_attributes": {
  }
}

Uploaded roles:

# knife role from file cookbooks/ceph/roles/ceph-mds.rb
# knife role from file cookbooks/ceph/roles/ceph-mon.rb
# knife role from file cookbooks/ceph/roles/ceph-osd.rb
# knife role from file cookbooks/ceph/roles/ceph-radosgw.rb

Assigned roles to nodes:

# knife node run_list add ceph-0.example.com 'role[ceph-mon],role[ceph-osd],role[ceph-mds]'
# knife node run_list add ceph-1.example.com 'role[ceph-mon],role[ceph-osd],role[ceph-mds]'

I didn’t bother with recipe[ceph::repo] as I don’t care about installation right now (Ceph is already installed in my VM images).

Had to set "chef_environment": "Ceph" for each node by running:

# knife node edit ceph-0.example.com
# knife node edit ceph-1.example.com

Didn’t set Ceph osd_devices per node – I’m just playing, so everything can sit on top of the root partition.

Now let’s see if it works:

# knife ssh name:ceph-0.example.com -x root chef-client
[2013-04-11T13:44:47+00:00] INFO: *** Chef 10.24.0 ***
[2013-04-11T13:44:48+00:00] INFO: Run List is [role[ceph-mon], role[ceph-osd], role[ceph-mds]]
[2013-04-11T13:44:48+00:00] INFO: Run List expands to [ceph::mon, ceph::osd, ceph::mds]
[2013-04-11T13:44:48+00:00] INFO: HTTP Request Returned 404 Not Found: No routes match the request: /reports/nodes/ceph-0.example.com/runs
[2013-04-11T13:44:48+00:00] INFO: Starting Chef Run for ceph-0.example.com
[2013-04-11T13:44:48+00:00] INFO: Running start handlers
[2013-04-11T13:44:48+00:00] INFO: Start handlers complete.
[2013-04-11T13:44:48+00:00] INFO: Loading cookbooks [apache2, apt, ceph]
No ceph-mon found.

[2013-04-11T13:44:48+00:00] INFO: Processing template[/etc/ceph/ceph.conf] action create (ceph::conf line 6)
[2013-04-11T13:44:48+00:00] INFO: template[/etc/ceph/ceph.conf] backed up to /var/chef/backup/etc/ceph/ceph.conf.chef-20130411134448
[2013-04-11T13:44:48+00:00] INFO: template[/etc/ceph/ceph.conf] updated content
[2013-04-11T13:44:48+00:00] INFO: template[/etc/ceph/ceph.conf] owner changed to 0
[2013-04-11T13:44:48+00:00] INFO: template[/etc/ceph/ceph.conf] group changed to 0
[2013-04-11T13:44:48+00:00] INFO: template[/etc/ceph/ceph.conf] mode changed to 644
[2013-04-11T13:44:48+00:00] INFO: Processing service[ceph_mon] action nothing (ceph::mon line 23)
[2013-04-11T13:44:48+00:00] INFO: Processing execute[ceph-mon mkfs] action run (ceph::mon line 40)
creating /var/lib/ceph/tmp/ceph-ceph-0.mon.keyring
added entity mon. auth auth(auid = 18446744073709551615 key=AQC8umZRaDlKKBAAqD8li3u2JObepmzFzDPM3g== with 0 caps)
ceph-mon: mon.noname-a 192.168.4.118:6789/0 is local, renaming to mon.ceph-0
ceph-mon: set fsid to f80aba97-26c5-4aa3-971e-09c5a3afa32f
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph-0 for mon.ceph-0
[2013-04-11T13:44:49+00:00] INFO: execute[ceph-mon mkfs] ran successfully
[2013-04-11T13:44:49+00:00] INFO: execute[ceph-mon mkfs] sending start action to service[ceph_mon] (immediate)
[2013-04-11T13:44:49+00:00] INFO: Processing service[ceph_mon] action start (ceph::mon line 23)
[2013-04-11T13:44:49+00:00] INFO: service[ceph_mon] started
[2013-04-11T13:44:49+00:00] INFO: Processing ruby_block[tell ceph-mon about its peers] action create (ceph::mon line 64)
connect to /var/run/ceph/ceph-mon.ceph-0.asok failed with (2) No such file or directory

connect to /var/run/ceph/ceph-mon.ceph-0.asok failed with (2) No such file or directory

[2013-04-11T13:44:49+00:00] INFO: ruby_block[tell ceph-mon about its peers] called
[2013-04-11T13:44:49+00:00] INFO: Processing ruby_block[get osd-bootstrap keyring] action create (ceph::mon line 79)
2013-04-11 13:44:49.928800 7f58e9677700 0 -- :/23863 >> 192.168.4.117:6789/0 pipe(0x18f0d30 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-04-11 13:44:52.928739 7f58efc1c700 0 -- :/23863 >> 192.168.4.118:6789/0 pipe(0x7f58e0000c00 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-04-11 13:44:55.929375 7f58e9677700 0 -- :/23863 >> 192.168.4.117:6789/0 pipe(0x7f58e0003010 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-04-11 13:44:58.929211 7f58efc1c700 0 -- :/23863 >> 192.168.4.118:6789/0 pipe(0x7f58e00039f0 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-04-11 13:45:01.929787 7f58e9677700 0 -- :/23863 >> 192.168.4.117:6789/0 pipe(0x7f58e00023b0 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
[...]

And it’s stuck there, trying and failing to talk to something.

See those “no such file or directory” errors after “service[ceph_mon] started”? Yeah? Well, the mon isn’t started, hence the missing sockets in /var/run/ceph.

Why isn’t the mon started? Turns out the ceph init script won’t start any mon (or osd or mds for that matter) if you don’t have entries in the config file with some suffix, e.g. [mon.a]. And all I’ve got is:

[global]
  fsid =  f80aba97-26c5-4aa3-971e-09c5a3afa32f
  mon initial members = ceph-0,ceph-1
  mon host = 192.168.4.118:6789, 192.168.4.117:6789

[osd]
    osd journal size = 1000
    filestore xattr use omap = true
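
For comparison, the kind of stanza the init script is looking for – written here by hand purely to illustrate, since the cookbook doesn’t generate it, and the exact keys are from memory of the old mkcephfs-style config – would presumably be something like:

# cat >> /etc/ceph/ceph.conf <<EOF
[mon.ceph-0]
    host = ceph-0
    mon addr = 192.168.4.118:6789
EOF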

But given the mon recipe triggers ceph-mon-all-starter if using upstart (which it would be, on the “Tested as working: Ubuntu Precise”), and ceph-mon-all-starter seems to just ultimately run something like ceph-mon --cluster=ceph -i ceph-0 regardless of what’s in the config file… Maybe I can cheat.

Directly starting ceph-mon from a shell on ceph-0 before the chef-client run turned out to be a bad idea (bit of a chicken and egg problem figuring out what to inject into the “mon host” line of the config file). So I put a bit of evil into the mon recipe:

diff --git a/recipes/mon.rb b/recipes/mon.rb
index 5cd76de..a518830 100644
--- a/recipes/mon.rb
+++ b/recipes/mon.rb
@@ -61,6 +61,10 @@ EOH
   notifies :start, "service[ceph_mon]", :immediately
 end
 
+execute 'hack to force mon start' do
+  command "ceph-mon --cluster=ceph -i #{node['hostname']}"
+end
+
 ruby_block "tell ceph-mon about its peers" do
   block do
     mon_addresses = get_mon_addresses()

Try again:

# knife ssh name:ceph-0.example.com -x root chef-client
[2013-04-11T15:10:43+00:00] INFO: *** Chef 10.24.0 ***
[2013-04-11T15:10:44+00:00] INFO: Run List is [role[ceph-mon], role[ceph-osd], role[ceph-mds]]
[2013-04-11T15:10:44+00:00] INFO: Run List expands to [ceph::mon, ceph::osd, ceph::mds]
[2013-04-11T15:10:44+00:00] INFO: HTTP Request Returned 404 Not Found: No routes match the request: /reports/nodes/ceph-0.example.com/runs
[2013-04-11T15:10:44+00:00] INFO: Starting Chef Run for ceph-0.example.com
[2013-04-11T15:10:44+00:00] INFO: Running start handlers
[2013-04-11T15:10:44+00:00] INFO: Start handlers complete.
[2013-04-11T15:10:44+00:00] INFO: Loading cookbooks [apache2, apt, ceph]
[2013-04-11T15:10:44+00:00] INFO: Storing updated cookbooks/ceph/recipes/mon.rb in the cache.
No ceph-mon found.

[2013-04-11T15:10:44+00:00] INFO: Processing template[/etc/ceph/ceph.conf] action create (ceph::conf line 6)
[2013-04-11T15:10:44+00:00] INFO: Processing service[ceph_mon] action nothing (ceph::mon line 23)
[2013-04-11T15:10:44+00:00] INFO: Processing execute[ceph-mon mkfs] action run (ceph::mon line 40)
[2013-04-11T15:10:44+00:00] INFO: Processing execute[hack to force mon start] action run (ceph::mon line 65)
starting mon.ceph-0 rank 1 at 192.168.4.118:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph-0 fsid f80aba97-26c5-4aa3-971e-09c5a3afa32f
[2013-04-11T15:10:44+00:00] INFO: execute[hack to force mon start] ran successfully
[2013-04-11T15:10:44+00:00] INFO: Processing ruby_block[tell ceph-mon about its peers] action create (ceph::mon line 69)
adding peer 192.168.4.118:6789/0 to list: 192.168.4.117:6789/0,192.168.4.118:6789/0

adding peer 192.168.4.117:6789/0 to list: 192.168.4.117:6789/0,192.168.4.118:6789/0

[2013-04-11T15:10:44+00:00] INFO: ruby_block[tell ceph-mon about its peers] called
[2013-04-11T15:10:44+00:00] INFO: Processing ruby_block[get osd-bootstrap keyring] action create (ceph::mon line 84)
2013-04-11 15:10:44.432266 7f8f9f8c0700  0 -- :/25965 >> 192.168.4.117:6789/0 pipe(0x16d9d30 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-04-11 15:10:50.433053 7f8f9f7bf700  0 -- 192.168.4.118:0/25965 >> 192.168.4.117:6789/0 pipe(0x7f8f94001d30 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-04-11 15:10:56.433268 7f8fa5e65700  0 -- 192.168.4.118:0/25965 >> 192.168.4.117:6789/0 pipe(0x7f8f94001d30 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-04-11 15:11:02.433987 7f8f9f8c0700  0 -- 192.168.4.118:0/25965 >> 192.168.4.117:6789/0 pipe(0x7f8f94002db0 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-04-11 15:11:08.434358 7f8f9f7bf700  0 -- 192.168.4.118:0/25965 >> 192.168.4.117:6789/0 pipe(0x7f8f94004fb0 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault

At this point it’s stalled, presumably waiting to talk to the other mon, so in another terminal window I had to kick off a chef-client run on ceph-1 to get it into the same state as ceph-0 (knife ssh name:ceph-1.example.com -x root chef-client). This allowed both nodes to progress to the next problem:

2013-04-11 15:11:28.563438 7f8fa5e67780 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication
2013-04-11 15:11:28.563443 7f8fa5e67780 -1 unable to authenticate as client.admin
2013-04-11 15:11:28.563814 7f8fa5e67780 -1 ceph_tool_common_init failed.
2013-04-11 15:11:29.572208 7f2369130780 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication
2013-04-11 15:11:29.572210 7f2369130780 -1 unable to authenticate as client.admin
2013-04-11 15:11:29.572527 7f2369130780 -1 ceph_tool_common_init failed.
2013-04-11 15:11:31.380073 7f1907d18780 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication
2013-04-11 15:11:31.380078 7f1907d18780 -1 unable to authenticate as client.admin
2013-04-11 15:11:31.380720 7f1907d18780 -1 ceph_tool_common_init failed.
2013-04-11 15:11:32.392345 7fc2bc462780 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication
[...]

And we’re spinning again.

But that’s enough for one day.