SATA controllers

Liam Proven lproven at gmail.com
Sun Mar 14 03:03:27 UTC 2010


On Sat, Mar 13, 2010 at 3:12 PM, Chan Chung Hang Christopher
<christopher.chan at bradbury.edu.hk> wrote:
> Liam Proven wrote:
>> On Wed, Mar 10, 2010 at 3:19 PM, Dave Howorth
>> <dhoworth at mrc-lmb.cam.ac.uk> wrote:
>>> I'm speccing a new machine and as part of it I'd like to have a
>>> linux-controlled 4-disk RAID 6 array using SATA 3 Gbps disks (aka SATA 2).
>>
>> You want more than 4 disks for RAID6. Seriously.
>>
>> RAID6 uses 2 drives' worth of parity; this means you get the capacity
>> of (N-2) drives, where N is the number of drives. Ergo, with 4 drives
>> you only get the capacity of 2. This is pointless, because if you are
>> losing half the capacity anyway, you would get /much/ better
>> performance from RAID10 (a stripe set of mirror pairs) or RAID 0+1 (a
>> mirror of stripe sets).
>
> md raid10 != md raid1+0. You can do a lot of fancy configs with the
> raid10 module that are not quite the same as nested raid1+0.

Do tell...? Always keen to learn!

> Anyway,
> raid6 can lose any two drives, whereas raid1+0 only survives certain
> combinations of two failed drives. So you choose between performance
> and a higher chance of survival. Given that disks from the same batch
> tend to die together, it is a risk worth considering.

That *is* a good point, I must concede.
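To put rough numbers on that (a quick back-of-envelope Python sketch,
assuming a hypothetical 4-disk 1+0 array built from two mirrored pairs):

from itertools import combinations

disks = ["a", "b", "c", "d"]
mirror_pairs = [{"a", "b"}, {"c", "d"}]    # hypothetical 1+0 layout

two_disk_failures = list(combinations(disks, 2))   # 6 possibilities

# RAID6 survives the loss of *any* two disks.
raid6_ok = len(two_disk_failures)

# 1+0 only dies when both halves of the same mirror go together.
raid10_ok = sum(1 for failed in two_disk_failures
                if set(failed) not in mirror_pairs)

print("raid6 survives", raid6_ok, "of", len(two_disk_failures))    # 6 of 6
print("raid10 survives", raid10_ok, "of", len(two_disk_failures))  # 4 of 6

So on 4 disks, 1+0 only rides out 4 of the 6 possible double failures,
whereas RAID6 rides out all 6.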

>> (I may have got the definitions of 0+1 and 10 transposed, but it's not
>> really important at this point!)
>>
>> RAID levels 0+1 and 10 use simple mirroring and striping, which needs
>> very little CPU and so makes them very fast, whereas RAID6 imposes 2
>> sets of parity calculations on the system, making writes slow.
>
> How much CPU is used is irrelevant, because CPUs have handled raid5
> parity easily (given no really heavy-duty CPU-chewing service like
> spamassassin+amavisd running alongside) since the days of the AMD Duron.

Well, yes, the PC can take it & you won't notice much load, but it
does negatively impact the write performance of the array, and also
the rebuild time.
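To put it crudely (a little Python sketch of the textbook small-write
penalty, assuming the usual read-modify-write path and no controller
cache):

# Disk operations needed to update one chunk, per the textbook
# read-modify-write path: read old data and old parity, then write
# new data and new parity. Rough figures, not measurements.
def small_write_ios(level):
    if level == "raid1/10":
        return 2        # just write both copies
    if level == "raid5":
        return 4        # read data + P, write data + P
    if level == "raid6":
        return 6        # read data + P + Q, write data + P + Q
    raise ValueError(level)

for level in ("raid1/10", "raid5", "raid6"):
    print(level, "->", small_write_ios(level), "disk ops per small write")

Roughly 4-6 disk operations per small write for parity RAID against 2
for a mirror, which is why the array feels slower however easily the
CPU shrugs off the parity maths itself.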

> The real performance breaker
> for raid5/6 is the bus traffic needed to read from disk before the
> parity calculations can be done, and the resulting bus contention.
> Therefore, hardware raid cards that use any processor better than an
> Intel i960, with a sufficiently large cache, will blow Linux software
> raid out of the water - sometimes even if the md array is raid1+0.

Hmm. May depend on the power of the controller. I have such a
controller & it's *not* quick. I suspect an md RAID would be quicker.
But mine is a few years old.

>> This means the *minimum* number of drives for a RAID6 is 5 drives,
>> which will give you the capacity of 3× a single drive.
>
> RAID5's minimum is 3 drives, so RAID6's is 4. I do not see why RAID6
> has to jump to 5.

>> I don't know how smart the Linux software RAID system is, but if it
>> has good rules built in, it won't let you create a RAID 6 out of <5
>> drives and should outright block 4. It's an invalid config.
>
> Outright block 3.

Well, OK. You do make a valid point about sensitivity to particular
disks failing in 0+1/10. I still contend that the minimum /sensible/
number for RAID6 is 5 disks, though.
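The capacity arithmetic, for what it's worth (trivial Python, assuming
equal-sized drives and plain mirrored pairs for RAID10 - md's fancier
raid10 layouts aside):

def usable_drives(n, level):
    # usable capacity, in units of one drive
    if level == "raid6":
        return n - 2        # two drives' worth of parity
    if level == "raid10":
        return n // 2       # every drive mirrored once
    raise ValueError(level)

for n in range(4, 9):
    print(n, "drives: raid6 =", usable_drives(n, "raid6"),
          " raid10 =", usable_drives(n, "raid10"))

At 4 drives the two come out dead level (2 drives' worth each); only
from 5 drives up does RAID6 start to claw back any capacity over a
mirror, which is why I call 5 the sensible minimum.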

-- 
Liam Proven • Profile: http://www.linkedin.com/in/liamproven
Email: lproven at cix.co.uk • GMail/GoogleTalk/Orkut: lproven at gmail.com
Tel: +44 20-8685-0498 • Cell: +44 7939-087884 • Fax: + 44 870-9151419
AOL/AIM/iChat/Yahoo/Skype: liamproven • LiveJournal/Twitter: lproven
MSN: lproven at hotmail.com • ICQ: 73187508



