Distributed RAID over iSCSI

In an earlier post, building-raid-over-network-share, I
showed how to build a distributed RAID over the network using CIFS (Samba or a Windows share).
Now I want to show you a similar method using iSCSI.
iSCSI is a protocol supported by practically every OS out there: Windows, Linux, BSD, Solaris, AIX, OS X and more.
In this howto I am going to use CentOS 6. We will have 3 node servers as iSCSI targets and one server as the
initiator, which will manage the RAID and will also act as a target for the created RAID device.
The explanation is very brief and does not cover all iSCSI settings; for better performance
I suggest reading the documentation of each individual distribution. Another suggestion: it is highly recommended
to use a spare network interface with a dedicated VLAN for iSCSI, as it can load the network with heavy traffic.
node settings (do these steps on all nodes)
1. install scsi-target-utils

# yum -y install scsi-target-utils

2. create a file to use as the backing store (1 GB in this example)

# dd if=/dev/zero of=/mydrive.file bs=1M count=1024

3. edit /etc/tgt/targets.conf and add a new target LUN.
make sure each server uses a unique IQN

<target iqn.2013-05:node1.0>
    backing-store /mydrive.file
</target>

4. start the tgtd service

# service tgtd start
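To verify the target is actually exported before moving on, tgt can dump its current configuration (a quick sanity check; run it on each node):

```shell
# show all configured targets, their LUNs and backing stores
tgtadm --lld iscsi --mode target --op show
```

You should see your iqn.2013-05:nodeX.0 target with a LUN backed by /mydrive.file.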

5. on the initiator server (our mdadm RAID master) discover the iSCSI targets

# iscsiadm -m discovery -t sendtargets -p node1
# iscsiadm -m discovery -t sendtargets -p node2
# iscsiadm -m discovery -t sendtargets -p node3

note: the -p flag takes the portal address (hostname or IP, optionally :port) of each node, not the IQN; discovery will report the IQNs, e.g. iqn.2013-05:node1.0
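Discovery only records the targets in the node database; the initiator still has to log in to each one. On CentOS 6 starting the iscsi service (next step) logs in to recorded nodes automatically, but an explicit login looks like this:

```shell
# log in to every target recorded by the discovery above
iscsiadm -m node -L all
```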

6. start the iscsi service and get to know the new devices

# service iscsi start

* the command "fdisk -l" should now show the new SCSI devices; in my case they are sda, sdb and sdc
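If you want to be sure which sdX device belongs to which node before building the RAID, the session print mode lists the attached disk per target (the grep just shortens the output):

```shell
# map each iscsi session to its local scsi disk
iscsiadm -m session -P 3 | grep -E "Target:|Attached scsi disk"
```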
---- time to build the soft RAID ----
1. install mdadm

# yum -y install mdadm

2. create a new RAID device

# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc 
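RAID 5 starts an initial sync right after creation; you can watch the progress before putting data on the array (standard mdadm commands, nothing iSCSI-specific):

```shell
# progress of the initial sync
cat /proc/mdstat
# full array state, including each member device
mdadm --detail /dev/md0
```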

from now on you should have a new RAID device called /dev/md0 that you can use.
* update
it is a good idea to add a spare device to the RAID, and even better to use the initiator itself (the host that runs mdadm),
so I create a new file with dd, turn it into a loop device, and add it to the RAID:

# dd if=/dev/zero of=/myfile.img bs=1M count=100
# losetup -v /dev/loop1 /myfile.img
# mdadm --add /dev/md0 /dev/loop1
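To have the array assembled again after a reboot, record it in mdadm.conf (note that the loop device is not persistent by itself; the losetup command above must run before assembly, e.g. from rc.local):

```shell
# save the array definition so it is assembled on boot
mdadm --detail --scan >> /etc/mdadm.conf
```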

So now the RAID should look like this:

 Number   Major   Minor   RaidDevice   State
    0       8       0         0        active sync   /dev/sda
    1       8      16         1        active sync   /dev/sdb
    3       8      32         2        active sync   /dev/sdc
    4       7       1         -        spare         /dev/loop1
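As mentioned in the intro, the initiator can now re-export the finished array as an iSCSI target of its own. It is the same tgt configuration as on the nodes, just backed by /dev/md0 (the IQN here is my own assumption, following the naming used above):

```
<target iqn.2013-05:raidmaster.0>
    backing-store /dev/md0
</target>
```

Then restart tgtd on the initiator and any client OS can log in to the RAID device over iSCSI.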