Building RAID over network shares

This How-To explains how to use remote SMB shares to build a RAID device
that can be shared back over NFS/SMB. The idea is very simple: use all the storage resources
on our LAN to build a central NFS/SMB share (which can be used for private clouds such as CloudStack).
This was tested on Debian 6 x86_64, but the commands are similar on all Linux distributions.
Note that this configuration is experimental and should not be used in production (yet).
1. Create a device file on each share; execute on all nodes.
Change [X] to match the node number. The size is bs*count, in this case 1024 MB.

~# dd if=/dev/zero of=/myshare/node[X].img bs=1M count=1024
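If the nodes are reachable over SSH, the same step can be scripted from the master instead of running it by hand on each node; a minimal sketch, assuming root SSH access and the node IPs used in step 2:

~# for X in 1 2 3; do ssh root@192.168.100.1$X "dd if=/dev/zero of=/myshare/node$X.img bs=1M count=1024"; done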

2. Mount these shares on the central server; run only on the master server.
Make a directory to hold all the SMB mounts.

~# mkdir -p /nodeshares/node{1,2,3}

Mount each share into its own directory. Note that the password is the one set with the smbpasswd command.

~# mount -t cifs -o username=myname,password=1 //192.168.100.11/myshare /nodeshares/node1
~# mount -t cifs -o username=myname,password=1 //192.168.100.12/myshare /nodeshares/node2
~# mount -t cifs -o username=myname,password=1 //192.168.100.13/myshare /nodeshares/node3
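For completeness, these mounts assume each node already exports /myshare over Samba. A minimal sketch of the node-side setup, assuming the share name myshare and the user myname used above (paths and options are illustrative; run on each node):

~# mkdir -p /myshare
~# useradd myname
~# smbpasswd -a myname
~# cat >> /etc/samba/smb.conf << 'EOF'
[myshare]
   path = /myshare
   valid users = myname
   writable = yes
EOF
~# /etc/init.d/samba restart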

3. Here comes the fun part: create loop devices from the files we created in step 1.
Note that the files are actually on remote hosts.

~# losetup -v /dev/loop1 /nodeshares/node1/node1.img
~# losetup -v /dev/loop2 /nodeshares/node2/node2.img
~# losetup -v /dev/loop3 /nodeshares/node3/node3.img
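To confirm the loop devices were attached to the right image files, you can list all active loop devices (output will vary with your setup):

~# losetup -a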
4. Build a RAID device out of all the shared device files. Create a RAID level 5 array using the loop devices; the device order does matter.

~# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/loop1 /dev/loop2 /dev/loop3
Watch the build process by viewing mdstat:

~# cat /proc/mdstat
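For a more detailed view of the array state (sync progress, device roles, failed members), mdadm can report on the array directly:

~# mdadm --detail /dev/md0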
5. Create a filesystem on /dev/md0, then mount and export the new volume as NFS or SMB back to the network.
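A minimal sketch of this step, formatting the array directly with ext4 instead of partitioning it first, and exporting it over NFS (the mount point /export/raid and the export options are assumptions; adapt them to your environment):

~# mkfs.ext4 /dev/md0
~# mkdir -p /export/raid
~# mount /dev/md0 /export/raid
~# echo "/export/raid 192.168.100.0/24(rw,sync,no_subtree_check)" >> /etc/exports
~# exportfs -ra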
— Persistence —
In order to keep the RAID array after the master server reboots, we need to tell mdadm
to scan the loop devices for MD superblocks. Edit /etc/mdadm/mdadm.conf and add the loop devices by
adding this line:

DEVICE /dev/loop*

Then run this command to make the newly created RAID array persistent:

~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
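Note that the loop devices themselves do not survive a reboot: the CIFS shares must be remounted and losetup re-run before mdadm can assemble the array. A minimal sketch of the reassembly, assuming the mounts from step 2 are back in place and mdadm.conf was updated as above:

~# losetup /dev/loop1 /nodeshares/node1/node1.img
~# losetup /dev/loop2 /nodeshares/node2/node2.img
~# losetup /dev/loop3 /nodeshares/node3/node3.img
~# mdadm --assemble --scan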
— What happens when a node restarts or gets disconnected —
mdadm will see the loop device as faulty and mark it as failed; this can be seen in mdstat.
In this example I shut down Samba on node3:

~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 loop3[3](F) loop1[0] loop2[1]
      1047552 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
unused devices: <none>

To repair it, you need to remove the faulty device:

~# mdadm --manage /dev/md0 --remove /dev/loop3
mdadm: hot removed /dev/loop3 from /dev/md0

After you remount the node, you can add the device back to the array:

~# mdadm --manage /dev/md0 --add /dev/loop3
mdadm: added /dev/loop3