Hardware vs. Software RAID (WAS: Re: [nycbug-talk] Fwd: Adaptec AAC raid support)
Mon Mar 21 17:05:50 EST 2005
Jay Savage wrote:
> It depends on how big you need your RAID to be, among other things,
> and what hardware you want to build it with. On IDE/ATA, you need one
> controller per disk, or you quickly run out of bandwidth. So your RAID
> is effectively limited to the number of PCI slots in your box.
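To put rough numbers on that bandwidth point (a back-of-the-envelope sketch; the per-disk figure is my assumption for an ATA-100 era drive):

```shell
# Theoretical shared bandwidth of a classic 32-bit/33 MHz PCI bus:
# 4 bytes per cycle x 33 million cycles/sec ~= 132 MB/s, shared by
# every card on the bus.
pci_mbps=$((4 * 33))

# Rough sustained throughput of one ATA-100 era disk (assumed ~50 MB/s).
disk_mbps=50

# How few disks it takes to saturate the whole bus:
echo "$((pci_mbps / disk_mbps)) disks saturate a plain PCI bus"
```

Two or three disks streaming at once and the bus is the bottleneck, which is why the slot count, not the channel count, ends up being the limit.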
I have some experience with building large soft-raids using PCI IDE
cards on FreeBSD and Linux.
Some problems with this approach:
* There are few, if any, PCI-X IDE controllers out there (unless you
count crappy firmware raid cards, which could work, I guess - but then
the price advantage of soft raid starts to go away). Hopefully PCIe will
change this state of affairs. Plain PCI alone is not going to give
you great performance - though that may be OK for some applications.
I've found that using hardware RAID cards is essential for performance,
if for no other reason than that they work fine in wide (64-bit) PCI
slots. I've also used the "many IDE cards" approach in situations where
"good enough" performance was, well, good enough.
* At least one vendor, Promise, puts a "bug" in their firmware that
limits how many of their PCI IDE cards can go in one system. I've
verified this with Promise; their firmware makes any more than two cards
simply not function. Avoid Promise at all costs. It is also worth
mentioning that this limit was introduced in newer firmware: older cards
running the older firmware do not exhibit it, but once flashed with the
newer firmware, they do.
I've found SIIG cards, using a Silicon Image chipset, to be unencumbered
by such foolishness, and they work fine (at least under Linux; I haven't
used them under FreeBSD as of yet).
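For completeness, a soft-raid of the kind described above is typically assembled under Linux with mdadm. This is just a sketch - the device names and the four-controller layout are hypothetical, and it obviously needs real disks to run:

```shell
# Hypothetical layout: four disks, each the master on its own add-on PCI
# IDE controller (/dev/hde, hdg, hdi, hdk in the old Linux IDE naming),
# combined into one RAID 5 md device.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/hde /dev/hdg /dev/hdi /dev/hdk

# Watch the initial resync:
cat /proc/mdstat
```

FreeBSD gets you to the same place with vinum (or gvinum on 5.x), just with different syntax.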