[nycbug-talk] DragonFly HAMMER filessytem

Peter Wright pete at nomadlogic.org
Wed Oct 17 14:53:39 EDT 2007


> Hi All,
>
<snip>
>
> I see what Hammer is trying to accomplish, and the clustering/
> replication goals are AMAZING - however, I have one big show-stopper
> that keeps me from using it:
>
> "...A volume is thus limited to 16TB..."
>
> I can't move forward with this.  Right now the largest filesystems I
> touch are just over 10TB, so I'm under the limit- but the people
> using them are outgrowing them at a quick pace.  In another year I
> expect to be working with 30TB for single fileservers.
> (btw those boxes use UFS2 on FreeBSD-6-REL, and the boxes are SOOOO
> stable)
>


<snipping a bunch of stuff>

While I think that hard limit is a concern, I've found in practice that
volumes that large tend to put all of your eggs in one basket: all of
your storage is dependent upon one piece of hardware.  If you instead
have multiple ~14TB filesystems tied together with a global namespace,
you get a more scalable system that is hopefully more fault tolerant
as well.

For example: I have a global namespace of
/webroot/websites/site{A->Z}/
where "siteA" lives on one volume and "siteZ" can live on another.
Using symlinks or something similar we can create the appearance that
all of the data lives on the same filesystem, so someone can do:
$ cd /webroot/websites/siteA
and not care about which filer or volume their data lives on.  If the
filer that hosts "siteA" goes down, becomes loaded, etc., we can still
serve data to the people living under "siteZ".  Going this route you
can have a multi-petabyte "filesystem" that should scale pretty well
(see the rough sketch below).
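
Here's a minimal sketch of what I mean, assuming two NFS filers with
made-up hostnames and export paths (filer01/filer02 - adjust to taste):

# each backend filer exports its own ~14TB volume
mkdir -p /mnt/filer01 /mnt/filer02
mount -t nfs filer01:/export/vol0 /mnt/filer01
mount -t nfs filer02:/export/vol0 /mnt/filer02

# stitch the global namespace together with symlinks, so clients
# only ever see /webroot/websites/site* and never the filer paths
mkdir -p /webroot/websites
ln -s /mnt/filer01/siteA /webroot/websites/siteA
ln -s /mnt/filer02/siteZ /webroot/websites/siteZ

If siteA outgrows its volume, or filer01 needs to come down, you migrate
the data and repoint one symlink; nothing under /webroot/websites
changes from the client's point of view.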


Anyway - I'm not trying to say "no one will ever need a filesystem
larger than 14TB" ;) but I think that, fortunately, in most current
production environments this will be less of an issue.

-pete

-- 
~~oO00Oo~~
Peter Wright
pete at nomadlogic.org
www.nomadlogic.org/~pete
310.869.9459
