Understanding BlueStore, Ceph’s New Storage Backend

On June 1, 2017 I presented Understanding BlueStore, Ceph’s New Storage Backend at OpenStack Australia Day Melbourne. As the video is up (and Luminous is out!), I thought I’d take the opportunity to share it, and write up the questions I was asked at the end.

First, here’s the video:

The bit at the start where the audio cut out was me asking “Who’s familiar with Ceph?” At this point, most of the 70-odd people in the room put their hands up. I continued with “OK, so for the two people who aren’t…” then went into the introduction.

After the talk we had a Q&A session, which I’ve paraphrased and generally cleaned up here.

With BlueStore, can you still easily look at the objects like you can through the filesystem when you’re using FileStore?

There’s no regular filesystem anymore, so you can’t just browse through it. However, you can use `ceph-objectstore-tool` to “mount” an offline OSD’s data via FUSE and poke around that way. Some more information about this can be found in Sage Weil’s recent blog post: New in Luminous: BlueStore.
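As a rough sketch, mounting an offline OSD’s data via FUSE looks something like the following. The paths and OSD ID are illustrative, and the OSD daemon must be stopped first, since the tool needs exclusive access to the store:

```shell
# Stop the OSD first -- ceph-objectstore-tool needs exclusive access
systemctl stop ceph-osd@0

# Expose the OSD's object store at /mnt/osd0 via FUSE
mkdir -p /mnt/osd0
ceph-objectstore-tool --op fuse \
    --data-path /var/lib/ceph/osd/ceph-0 \
    --mountpoint /mnt/osd0

# Browse the objects, then unmount and restart the OSD when done
ls /mnt/osd0
fusermount -u /mnt/osd0
systemctl start ceph-osd@0
```

This works for both FileStore and BlueStore OSDs, since the tool speaks to the ObjectStore layer rather than to a backing filesystem.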

Do you have real life experience with BlueStore for how IOPS performance scales?

We (SUSE) haven’t released performance numbers yet, so I will instead refer you to Sage Weil’s slides from Vault 2017, and Allen Samuels’ slides from SCALE 15x, which together include a variety of performance graphs for different IO patterns and sizes. You can also expect to see more about this on the Ceph blog in the coming weeks.

What kind of stress testing has been done for corruption in BlueStore?

It’s well understood by everybody that it’s rather important to stress test these things, and that people really do care if their data goes away. Ceph has a huge battery of integration tests, many of which are run regularly in the upstream labs against Ceph’s master and stable branches, while others are run less frequently as needed. The various downstreams also all run their own independent testing and QA.

Wouldn’t it have made sense to try to enhance existing POSIX filesystems such as XFS, to make them do what Ceph needs?

Long answer: POSIX filesystems still need to provide POSIX semantics. Changing the way things work (or adding extensions to do what Ceph needs) in, say, XFS — assuming it’s possible at all — would be a big, long, scary, probably painful project.

Short answer: it’s really a different use case; better to build a storage engine that fits the use case, than shoehorn in one that doesn’t.

Best answer: go read New in Luminous: BlueStore ;-)

2 thoughts on “Understanding BlueStore, Ceph’s New Storage Backend”

  1. Thanks for the update, Tim.

    I’m the person you had a brief chat with afterwards, and who asked both the questions re: data navigation and IOPS performance.

    Certainly a big step forward for Ceph, and I’ll likely be trying it out in a few months’ time as a potential file attachment store for a few Kubernetes clusters.

  2. Hi Tim, went to one of your sessions at SUSECON. Just watched this and found it useful. Cheers.
