[nycbug-talk] BSD Cluster Filesystem Roundup

Pete Wright pete at nomadlogic.org
Mon Feb 23 12:41:07 EST 2009

On 23-Feb-09, at 6:14 AM, Steven Kreuzer wrote:

> On 2/17/2009, DragonFly 2.2 was released[1]. One of the most
> interesting aspects of this release is that HAMMER is now considered
> production ready. If you are not familiar with what HAMMER is, check
> out Matt Dillon's talk from NYCBSDCon 2008 [2] for more information.
> I am curious if anyone has played around with HAMMER and would be
> willing to provide us with a trip report? Actually, I would be curious
> to find out what your experience has been with DragonFly as a whole.

+1 from me on that too. Having read Matt's papers on dfly it does look
interesting, although I've been pretty happy with my FreeBSD nodes
once we got past 5.x.

> Recently, I stumbled upon gluster[3] which is an open source (GPL3)
> clustered filesystem that supports Linux, Mac OS X and FreeBSD 7.
> It aggregates various storage bricks over Infiniband RDMA or TCP/IP
> interconnect into one large parallel network file system. It makes
> use of FUSE which seems a bit suspect, but they say they can still
> sustain 1 GB/s per storage brick over Infiniband RDMA. To me, this
> looks like the most promising clustered filesystem that supports BSD.
> I guess the question becomes, what other clustered filesystems are
> there that support BSD and has anyone deployed them to production?

Too bad it's GPLv3. I'd be suspicious of using FUSE, although I
reckon it helps support multiple platforms. According to the roadmap
it looks like they are close to implementing code to guard against
split-brain scenarios and the like; that's a big one, so hopefully
they can work that out.
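For anyone who wants to kick the tires, gluster's setup in the 2.x era is just a pair of "volfiles" wiring translators together: each server exports a local directory as a brick, and the client stacks a cluster translator on top of the remote bricks to get one namespace. A rough sketch below - the hostnames, paths, and wide-open auth rule are made up for illustration, so check the gluster docs for the exact translator options before trusting any of it:

```
# server.vol -- export a local directory as a storage brick
# (paths and hostnames here are hypothetical)
volume brick
  type storage/posix
  option directory /export/brick1        # local dir backing the brick
end-volume

volume server
  type protocol/server
  option transport-type tcp
  subvolumes brick
  option auth.addr.brick.allow *         # wide open; restrict in real use
end-volume

# client.vol -- aggregate two bricks into one parallel namespace
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host node1               # hypothetical server hostnames
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host node2
  option remote-subvolume brick
end-volume

volume dist
  type cluster/distribute                # hash files across the bricks
  subvolumes remote1 remote2
end-volume
```

The client side then gets mounted through FUSE (something like `glusterfs -f client.vol /mnt/gluster`), which is where the FUSE dependency mentioned above comes in.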

Funny - I know of a handful of products based on FreeBSD that
support clustered storage (NetApp GX, Isilon), but at least ZFS support
is coming along for FreeBSD, so hopefully it's only a matter of time
before someone develops a way to cluster ZFS heads together <grin>

As an aside, I was looking at GFS (Google File System) workalikes.
This one looked pretty interesting:

From a performance standpoint I'd be curious to see how it does,
but it's interesting nonetheless...
