GlusterFS on the cheap with Rackspace’s Cloud Servers or Slicehost

September 15, 2010 | building43

Guest post from Major Hayden.

High availability is certainly not a new concept, but if there’s one thing that frustrates me with high availability VM setups, it’s storage. If you don’t mind going active-passive, you can set up DRBD, toss your favorite filesystem on it, and you’re all set.

If you want to go active-active, or if you want multiple nodes active at the same time, you need to use a clustered filesystem like GFS2, OCFS2 or Lustre. These are certainly good options to consider but they’re not trivial to implement. They usually rely on additional systems and scripts to provide reliable fencing and STONITH capabilities.

What about the rest of us who want multiple active VMs with simple replicated storage that doesn’t require any additional elaborate systems? This is where GlusterFS really shines. GlusterFS can ride on top of whichever filesystem you prefer, and that’s a huge win for those who want a simple solution. However, that means it has to use FUSE, which will limit your performance.

Let’s get this thing started!

Consider a situation where you want to run a WordPress blog on two VMs with load balancers out front. You’ll probably want to use GlusterFS’s replicated volume mode (RAID 1-ish) so that the same files are on both nodes all of the time. To get started, build two small Slicehost slices or Rackspace Cloud Servers. I’ll be using Fedora 13 in this example, but the instructions for other distributions should be very similar.

First things first — be sure to set a new root password and update all of the packages on the system. This should go without saying, but it’s important to remember. We can clear out the default iptables ruleset since we will make a customized set later:

# iptables -F
# /etc/init.d/iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:        [  OK  ]

GlusterFS communicates over the network, so we will want to ensure that traffic only moves over the private network between the instances. We will need to add the private IPs and a special hostname for each instance to /etc/hosts on both instances. I’ll call mine gluster1 and gluster2:

10.xx.xx.xx gluster1
10.xx.xx.xx gluster2

You’re now ready to install the required packages on both instances:

yum install glusterfs-client glusterfs-server glusterfs-common glusterfs-devel

Make the directories for the GlusterFS volumes on each instance:

mkdir -p /export/store1

We’re ready to make the configuration files for our storage volumes. Since we want the same files on each instance, we will use the --raid 1 option. This only needs to be run on the first node:

# glusterfs-volgen --name store1 --raid 1 gluster1:/export/store1 gluster2:/export/store1
Generating server volfiles.. for server 'gluster2'
Generating server volfiles.. for server 'gluster1'
Generating client volfiles.. for transport 'tcp'

Once that’s done, you’ll have four new files:

  • booster.fstab – you won’t need this file
  • gluster1-store1-export.vol – server-side configuration file for the first instance
  • gluster2-store1-export.vol – server-side configuration file for the second instance
  • store1-tcp.vol – client side configuration file for GlusterFS clients

Copy the gluster1-store1-export.vol file to /etc/glusterfs/glusterfsd.vol on your first instance. Then, copy gluster2-store1-export.vol to /etc/glusterfs/glusterfsd.vol on your second instance. The store1-tcp.vol file should be copied to /etc/glusterfs/glusterfs.vol on both instances.
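Assuming you ran glusterfs-volgen on gluster1 and have root SSH access between the instances over the private network (an assumption; adjust paths and users to your setup), the copies might look like this:

```shell
# On gluster1, where glusterfs-volgen was run:
cp gluster1-store1-export.vol /etc/glusterfs/glusterfsd.vol
cp store1-tcp.vol /etc/glusterfs/glusterfs.vol

# Push the second instance's files across the private network:
scp gluster2-store1-export.vol root@gluster2:/etc/glusterfs/glusterfsd.vol
scp store1-tcp.vol root@gluster2:/etc/glusterfs/glusterfs.vol
```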

At this point, you’re ready to start the GlusterFS servers on each instance:

/etc/init.d/glusterfsd start
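If you want glusterfsd to survive a reboot, enable it with chkconfig as well (a sketch for Fedora’s SysV init; the listen port comes from the transport.socket.listen-port option in your generated glusterfsd.vol, so check yours before relying on it):

```shell
# Enable glusterfsd at boot on Fedora/RHEL-style systems:
chkconfig glusterfsd on

# Confirm the daemon is running and listening
# (the port is set in /etc/glusterfs/glusterfsd.vol):
netstat -tlnp | grep glusterfsd
```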

You can now mount the GlusterFS volume on both instances:

mkdir -p /mnt/glusterfs
glusterfs /mnt/glusterfs/

You should now be able to see the new GlusterFS volume on both instances:

# df -h /mnt/glusterfs
Filesystem            Size  Used Avail Use% Mounted on
                      9.4G  831M  8.1G  10% /mnt/glusterfs
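To have the volume come back automatically at boot, you can point an /etc/fstab entry at the client volfile on both instances. This is a sketch; the volfile path matches where we copied store1-tcp.vol above, and _netdev tells the init scripts to wait for networking before mounting:

```shell
# /etc/fstab entry -- mounts the client volfile via the glusterfs helper:
/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults,_netdev  0  0
```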

As a test, you can create a file on your first instance and verify that your second instance can read the data:

[root@gluster1 ~]# echo "We're testing GlusterFS" > /mnt/glusterfs/test.txt
[root@gluster2 ~]# cat /mnt/glusterfs/test.txt
We're testing GlusterFS

If you remove that file on your second instance, it should disappear from your first instance as well.

Obviously, this is a very simple and basic implementation of GlusterFS. You can increase performance by making dedicated VMs just for serving data, and you can adjust the default performance options when you mount a GlusterFS volume. Limiting access to the GlusterFS servers is also a good idea.
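For example, you could restrict GlusterFS traffic to the peer’s private address with iptables. This is a sketch: substitute your actual private IPs, and verify the port against the transport.socket.listen-port option in your generated glusterfsd.vol (6996 was the glusterfs-volgen default at the time, but check yours):

```shell
# On gluster1: accept GlusterFS traffic only from gluster2's private IP,
# drop it from everywhere else. Port 6996 is an assumption -- confirm it
# against your generated volfile before applying.
iptables -A INPUT -p tcp -s 10.xx.xx.xx --dport 6996 -j ACCEPT
iptables -A INPUT -p tcp --dport 6996 -j DROP
/etc/init.d/iptables save
```

Run the mirror-image rules on gluster2, swapping in gluster1’s private IP.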

If you want to read more, I’d recommend reading the GlusterFS Technical FAQ and the GlusterFS User Guide.

This post was originally posted on May 27, 2010 on Major’s blog, Racker Hacker.

Major Hayden is a Linux Systems Engineer for Rackspace in San Antonio. He works with the Cloud Servers and Slicehost virtualization products. Major’s primary focus is on base image maintenance, kernel customization and tactical optimization solutions. He also maintains multiple blogs and a MySQL optimization script called mysqltuner. Outside of Rackspace, Major enjoys contributing to the open source community, running, and taking care of his chinchillas.

Roman J September 24, 2010 at 12:06 pm

No matter what cloud server instance you take, you get pre-formatted storage with one big ext3 filesystem on it.
That makes it impossible to run write-intensive workloads with a lot of files (like MySQL InnoDB with a tablespace-per-table configuration, for example).
Also, your cloud instance shares the same physical storage with tens of other instances running on the same host.
So there is no point in ordering a 620GB node (620GB on ext3? nobody would do that on a physical server these days) when you cannot effectively utilize it without lock-ups and slowdowns at the filesystem/storage level.

Major Hayden September 17, 2010 at 11:28 am

Kent -

Thanks for the comment! The network bandwidth is currently limited on the internal network between Cloud Servers instances, so that will be a limiting factor for your GlusterFS performance.

Additional block storage would make it much easier (and cost-effective) to use – I completely agree. I’ll be sure that your feedback makes it to the right place.


Kent Langley September 15, 2010 at 2:07 pm

The issue with running GlusterFS on Rackspace is that there is no way to add more block storage to an individual node. Also, according to the price list here:

So, a 620GB node would be ~$700/mo or $1.12 per GB. Then, of course, you need at least two or you haven’t really done anything useful. So your price per GB will double to $2.24/GB. That’s quite expensive.

Or, you’re limited to tiny little nodes. A 256MB instance with 20GB of storage* for about $11/mo is $0.55/GB, times two for $1.10/GB.

* You won’t be able to use all 20GB for your volumes.

Basically, Rackspace Cloud is missing an EC2 EBS-like analog for more affordable block storage.

The network bandwidth is severely limited for the smaller instances. This could be problematic. Or, is it not limited on the internal interfaces? That is unclear to me at the moment.

I love Rackspace Cloud and use it all the time, but I probably would not use it this way for anything very big. Still, the approach described in the article is a nice way to do active/active on a couple of nodes alongside applications that are already there and running in a load-balanced way.

