[nycbug-talk] DragonFly HAMMER filesystem
Isaac Levy
ike at lesmuug.org
Thu Oct 18 06:11:55 EDT 2007
Wordup Yarema, All,
On Oct 17, 2007, at 5:26 PM, Yarema wrote:
> Perhaps now would be a good time to drop a note to Matt Dillon
> expressing your concerns. Maybe he'll expand these limits now
> instead of "in the future".
What timing to ask!
So I emailed Matt Dillon about it, and long story short, he changed
this just days ago: HAMMER now supports 32768 TB per volume (totally
good enough for me!), and the design leaves these limits expandable
again in the future.
Here's the entire message thread with the details; my own quick checks
of the arithmetic follow each message:
Begin forwarded message:
> :Hello Matthew,
> :
> :My name is ike, and today on the NYC*BUG-talk mailing list, your
> :designs for the HAMMER filesystem came up... (with excitement, I
> :should add!)
> :...
> :
> : "A volume is thus limited to 16TB..."
> :
> :and later it states,
> :
> : "HAMMER's on-disk structures are designed to allow future
> :expansion through expansion of these limits."
> :
> :...
> :--
> :With that stated, is there any plan for when the expansion in size
> :for HAMMER filesystem will happen? The rest of the designed features
> :seem pretty spectacular!
> :
> :Thanks in advance for any comments or urls, and good luck out West!
> :
> :Best,
> :.ike
>
> Yah, I corrected the deficiency a few days ago. There was a limitation
> in the cluster allocation algorithm because I was trying to get away
> with using only one I/O for the hinted radix tree and was limited to
> a 16K filesystem buffer to hold the tree. This resulted in a limit of
> 32768 512M clusters (16TB).
>
> I corrected it by adding another layer. So now the allocation layer
> in the volume header can manage up to 16384 super-cluster buffers and
> each super-cluster buffer can manage up to 32768 clusters. So now
> the per-volume limit is 16384x32768 clusters instead of 32768 clusters.
>
> Each cluster in the new scheme is limited to 4096 16K filesystem buffers.
> So the per-volume limit is now 16Kx32Kx4Kx16K = 32768 TB per volume.
> That ought to be enough for now.
>
> There can be up to 32768 volumes, too, so the actual filesystem size
> limitation is much larger, but I agree that it is easier to manage
> large chunks of disk than to manage lots of smaller chunks, so fixing
> the per-volume limitation was important.
>
> -Matt
> Matthew Dillon
> <dillon at backplane dot com>
>
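Just to sanity-check the arithmetic in Matt's message above: all the
limits are powers of two, so they multiply out exactly. Here's a quick
C sketch of my own (built only from the figures he quotes, not from
any HAMMER source):

    /* Sanity check of the volume-size arithmetic in Matt's message.
     * All constants come from his figures; this is not HAMMER code.
     */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t KB = 1024, MB = KB * KB, TB = MB * MB;

        /* Old scheme: one 16K buffer holds the whole hinted radix
         * tree, capping a volume at 32768 clusters of 512MB each. */
        uint64_t old_limit = 32768ULL * 512 * MB;

        /* New scheme: 16384 super-clusters, each managing 32768
         * clusters, each cluster holding 4096 16K buffers. */
        uint64_t cluster   = 4096ULL * 16 * KB;           /* 64MB */
        uint64_t new_limit = 16384ULL * 32768 * cluster;

        printf("old per-volume limit: %llu TB\n",
               (unsigned long long)(old_limit / TB));     /* 16 */
        printf("new per-volume limit: %llu TB\n",
               (unsigned long long)(new_limit / TB));     /* 32768 */

        /* And with up to 32768 volumes per filesystem: */
        printf("filesystem ceiling:   %llu TB\n",
               (unsigned long long)(new_limit / TB * 32768));
        return 0;
    }

That last figure works out to 2^30 TB across a full 32768-volume
filesystem, so as Matt says, the aggregate limit was never the
problem; the per-volume cap was the one worth fixing.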
He then went on to add,
On Oct 18, 2007, at 2:16 AM, Matthew Dillon wrote:
> :Fantastic, thank you for the lightning-quick response!
> :
> :Would you mind if I post your reply to the NYC*BUG-talk list as-is?
> :
> :Rocket-
> :.ike
>
> Sure, go ahead. I'll add a little too:
>
> The new scheme extends the allocation mechanism all the way down
> to a record boundary in a buffer (64 bytes) and provides hinting
> through all the layers. I am hoping this will result in a fairly
> efficient allocator. HAMMER will cache the most recently allocated
> buffers to bypass as much of the allocation layering as possible.
>
> The 4096 filesystem buffers per cluster limit allows record, B-Tree,
> and piecemeal data allocations within the cluster to be separately
> tracked through the embedding of 4 small radix allocation trees in
> the cluster header.
>
> So adding that extra major cluster allocation layer actually solves
> a whole bunch of problems, not just the max volume size problem.
>
> -Matt
> Matthew Dillon
> <dillon at backplane dot com>
>
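The hinting Matt mentions is the neat part. Here's a toy two-layer
allocator of my own (nothing like HAMMER's actual on-disk structures)
just to illustrate how a cached "this subtree is full" hint lets the
allocator skip exhausted layers without touching them:

    /* Toy two-layer hinted allocator. Purely illustrative: the sizes
     * are tiny stand-ins for 16384 super-clusters x 32768 clusters.
     */
    #include <stdio.h>

    #define SUPERS    8                 /* upper layer entries       */
    #define PER_SUPER 16                /* slots under each entry    */

    static unsigned char used[SUPERS][PER_SUPER]; /* leaf: 1 = taken */
    static unsigned char full[SUPERS];            /* hint: 1 = full  */

    /* Allocate one slot, consulting the hint layer first.
     * Returns a global slot index, or -1 if everything is taken. */
    static int alloc_slot(void)
    {
        for (int s = 0; s < SUPERS; s++) {
            if (full[s])
                continue;               /* hint: skip full subtrees  */
            for (int c = 0; c < PER_SUPER; c++) {
                if (!used[s][c]) {
                    used[s][c] = 1;
                    /* update the hint if this filled the subtree */
                    int any_free = 0;
                    for (int i = 0; i < PER_SUPER; i++)
                        if (!used[s][i]) { any_free = 1; break; }
                    full[s] = !any_free;
                    return s * PER_SUPER + c;
                }
            }
            full[s] = 1;                /* repair a stale hint       */
        }
        return -1;
    }

    static void free_slot(int slot)
    {
        used[slot / PER_SUPER][slot % PER_SUPER] = 0;
        full[slot / PER_SUPER] = 0;     /* one free slot clears hint */
    }

    int main(void)
    {
        for (int i = 0; i < SUPERS * PER_SUPER; i++)
            alloc_slot();
        printf("volume full: %d\n", alloc_slot());    /* -1 */
        free_slot(42);
        printf("reallocated: %d\n", alloc_slot());    /* 42 */
        return 0;
    }

Scale that up to 16384 super-clusters of 32768 clusters each and most
allocations never even look at space that's known to be gone, which I
take to be what he means by hinting through all the layers.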
Well how about that. Lightning fast timing :)
Looks like it's high time to give HAMMER a shot...
On Oct 17, 2007, at 5:26 PM, Yarema wrote:
<snip>
> The BeOS bfs...
<snip similarities>
Oh man, you just got me WAY interested again, thanks!
Regarding the BeOS, one of my favorite filesystem bedtime stories:
http://www.nobius.org/~dbg/practical-file-system-design.pdf
(1.1 MB)
Rocket-
.ike