Yesterday Anton Prassl here at the MUG asked me about strange behavior of his DC4320 RAID controller. He had four disks of similar size attached to the controller and had configured them as RAID 5, but after booting Linux he was presented with two RAID 1 disks, each consisting of two of the attached disks.
I could only point him to the dmraid man page, since I have little experience with those controllers. Anton's further investigations turned up that there were multiple metadata segments on the disks and that dmraid was loading an old setup (the two RAID 1 sets) instead of the new RAID 5.
This happens because NVidia and Silicon Image controllers use different formats for storing RAID information on the disk. But dmraid offers an option to force the use of a specific format through the "-f" switch.
To list all supported formats use:
dmraid -l
This will print a list similar to this:
asr     : Adaptec HostRAID ASR (0,1,10)
ddf1    : SNIA DDF1 (0,1,4,5,linear)
hpt37x  : Highpoint HPT37X (S,0,1,10,01)
hpt45x  : Highpoint HPT45X (S,0,1,10)
isw     : Intel Software RAID (0,1)
jmicron : JMicron ATARAID (S,0,1)
lsi     : LSI Logic MegaRAID (0,1,10)
nvidia  : NVidia RAID (S,0,1,10,5)
pdc     : Promise FastTrack (S,0,1,10)
sil     : Silicon Image(tm) Medley(tm) (0,1,10)
via     : VIA Software RAID (S,0,1,10)
dos     : DOS partitions on SW RAIDs
In our case there were both "nvidia" and "sil" segments on the disks, and the RAID 5 was described by the "nvidia" segments.
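To see for yourself which metadata formats are present on the disks, dmraid can list the raw devices it discovers along with the format each one carries, and the RAID sets it would build from them:

```shell
# List the block devices dmraid recognizes, together with the
# metadata format found on each (e.g. "nvidia" or "sil").
dmraid -r

# Show the RAID sets dmraid would assemble from that metadata.
dmraid -s
```

If both formats show up here, you have the same problem of competing metadata segments.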
The only thing left to do was to tell dmraid which format to use:
dmraid -f nvidia ...
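For example, to activate all RAID sets while considering only the nvidia metadata, something like the following should work (the "-ay" switch activates all discovered sets):

```shell
# Activate all RAID sets, looking only at nvidia-format metadata.
dmraid -ay -f nvidia

# If the stale "sil" segments keep causing trouble, dmraid can also
# erase that metadata for good. This is destructive, so make sure
# you really want to discard the old setup before running it:
# dmraid -rE -f sil
```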
This can also be set for the whole system, but the file where it is configured differs from distribution to distribution. On Debian one has to alter the file /sbin/dmraid-activate, where several calls to the dmraid binary are made.
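The change amounts to appending the format switch to the dmraid invocations inside that script. The exact calls differ between versions of the Debian package, so the following is only a hypothetical sketch of what such an edit might look like:

```shell
# Inside /sbin/dmraid-activate, a call along the lines of
#   dmraid -ay -i -p "$Raid_Name"
# would become
#   dmraid -f nvidia -ay -i -p "$Raid_Name"
# so that only nvidia-format metadata is considered at boot.
```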
If a DM-RAID device is set up to serve as the root filesystem, you should take care to include the appropriate modules in your initrd image. On Debian a simple "update-initramfs -k all -u" should be sufficient.