[nycbug-talk] ZFS question
Charles Sprickman
spork at bway.net
Mon Sep 26 22:36:48 EDT 2011
On Sep 26, 2011, at 9:50 PM, Isaac Levy wrote:
> On Sep 26, 2011, at 9:13 PM, George Rosamond wrote:
>
>> Quick question for the ZFS users in the audience:
>>
>> The box in question will have 8 SATA drives on it. The RAID card (which isn't going to be doing any RAID) has four SATA ports. There's four more SATA ports on the motherboard.
>>
>> Anything I should be concerned about here in terms of performance and data integrity?
>>
>> TIA
>>
>> George
>
> Cool- I did this a while back when ZFS on FreeBSD was really green; I hacked hardware together with as many SATA cards as I could dig out. ZFS itself was rough-edged and new back then, so my results were, um, a good learning experience :)
> The mixed SATA ports themselves didn't seem to cause any particular grief.
>
> If you're concerned about the performance/reliability of mixing the ports, I suggest running some tests using good ol' bonnie++. It sounds tedious, but building and destroying zpools is so fast that it's really quite fun and quick. I would do a straight bonnie++ run on the box in each of the following configurations and see what suits you:
>
> - All 8 drives as one big zpool
> - split drives into 2 pools: 4 on the card, 4 on the motherboard
> - a single drive as a zpool
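The build/test/destroy cycle Ike describes is short enough to script. A rough sketch of all three configurations (device names ada0-ada7 are assumptions; substitute whatever your controllers actually expose, and run bonnie++ as an unprivileged user since it refuses to run as root):

```shell
# All 8 drives as one big pool:
zpool create test ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7
bonnie++ -d /test -u nobody
zpool destroy test

# Split into 2 pools: 4 on the motherboard, 4 on the card:
zpool create mobo ada0 ada1 ada2 ada3
zpool create card ada4 ada5 ada6 ada7
bonnie++ -d /mobo -u nobody
bonnie++ -d /card -u nobody
zpool destroy mobo && zpool destroy card

# Single-drive baseline:
zpool create solo ada0
bonnie++ -d /solo -u nobody
zpool destroy solo
```

Each create/destroy takes seconds, which is what makes this kind of matrix testing practical at all.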
>
> Other fun tests, should you have time/interest:
> - raidz vs raidz2
> - zfs block-level gzip compression
This one sometimes gives interesting results depending on what type of data you're storing. Hint: being able to read more than one block's worth of data while pulling just one compressed block off the drive can speed some things up, and the CPU hit is negligible.
> - atime on, atime off
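Both of those last two are just dataset properties, so flipping them between bonnie++ runs is a one-liner each. A sketch, assuming a pool named "tank" (note compression only applies to newly written data, so set it before the benchmark writes anything):

```shell
zfs set compression=gzip tank    # block-level gzip (gzip-1 through gzip-9 also accepted)
zfs set atime=off tank           # skip the access-time update on every read
zfs get compression,atime tank   # confirm what's currently in effect
```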
>
> Sounds like a tall order to test, but it's easy, since there's no waiting around for newfs to complete. And if a configuration bombs out, you'll have a chance to see why.
>
> --
> The SATA ports themselves should operate normally; the most important ingredients for running ZFS are still a 64-bit CPU and as much memory as possible.
Here are a few additional tidbits:
-If you end up with a bunch of mirrors and put those in a pool, it's probably wise to split your mirrors across the controllers.
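A sketch of what I mean, assuming ada0-ada3 sit on the motherboard and ada4-ada7 on the card: pair each mirror across the two controllers, so if one controller dies every vdev is merely degraded rather than any of them being lost outright.

```shell
# Hypothetical layout - adjust device names to your hardware.
zpool create tank \
  mirror ada0 ada4 \
  mirror ada1 ada5 \
  mirror ada2 ada6 \
  mirror ada3 ada7
zpool status tank   # verify each mirror spans both controllers
```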
-Be aware of the nasty 4K-sector issue on some of the newer high-capacity SATA drives: they report 512-byte sectors while physically using 4K ones, and a pool created with the wrong alignment takes a real write-performance hit.
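The usual workaround on FreeBSD is the gnop(8) trick: put a transient 4K-sector provider on top of one drive at pool-creation time so zpool selects ashift=12, which then sticks for the life of the vdev. A sketch, with assumed device names:

```shell
gnop create -S 4096 /dev/ada0          # fake a 4K sector size on one member
zpool create tank ada0.nop ada1 ada2 ada3
zpool export tank
gnop destroy /dev/ada0.nop             # the .nop device was only needed at creation
zpool import tank                      # pool retains ashift=12 permanently
zdb tank | grep ashift                 # should report 12
```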
-ZFS on root is not as scary as it sounds
-Don't assume that running amd64 frees you from putting black magic in loader.conf; you may well still need it
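For the curious, the "black magic" is typically kmem/ARC sizing in /boot/loader.conf. A starting-point sketch, not gospel; these particular values assume roughly 8 GB of RAM and should be scaled to yours:

```shell
# /boot/loader.conf fragment (illustrative values, tune for your box)
vm.kmem_size="6G"
vm.kmem_size_max="6G"
vfs.zfs.arc_max="4G"
```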
-Most FreeBSD-specific docs on zfs are outdated, rely on the mailing list archives
-mfsbsd is your best "rescue disk" solution - http://mfsbsd.vx.sk/ - also some of Martin's backports of fixes/enhancements can be very worthwhile
-Make sure you're using AHCI
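Quick way to check: with AHCI active your disks attach as ada(4) devices rather than the old ad(4) names. A sketch:

```shell
# Look for the ahci driver and ada devices in the boot messages:
grep -E 'ahci|ada' /var/run/dmesg.boot

# If it's not there, flip the BIOS from IDE/legacy to AHCI and load
# the driver at boot:
echo 'ahci_load="YES"' >> /boot/loader.conf
```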
-Snapshots are awesome. Automated snapshots are awesome-er (http://people.freebsd.org/~rse/snapshot/ - I ditch the automounter stuff though).
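Even without the automation, the manual workflow is tiny. A sketch with hypothetical dataset names (-r takes the whole dataset tree atomically):

```shell
zfs snapshot -r tank@2011-09-26      # recursive snapshot of everything under tank
zfs list -t snapshot                 # see what you've accumulated
zfs rollback tank/home@2011-09-26    # revert one dataset to the snapshot
zfs destroy tank/home@2011-09-26     # reclaim the space when done with it
```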
-Anecdotal, but I've been doing very, very bad things to both my home box and some older test boxes (intentional power loss, running panicky kernels, yanking drives in improper ways, etc.) and ZFS (with root on ZFS, btw) has proven totally robust in the face of this mistreatment - more than I can say for a few UFS2 boxes that have been through the same.
-MOAR RAM
C
>
> Here's some more fun notes, lots of hardware/zfs intricacies discussed:
> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
>
> And for the record, tons of working info on FreeBSD ZFS:
> http://wiki.freebsd.org/ZFS
>
> Best,
> .ike
>
>
> _______________________________________________
> talk mailing list
> talk at lists.nycbug.org
> http://lists.nycbug.org/mailman/listinfo/talk