RAID 5 With LVM on Ubuntu
RAID 5 provides the best of both worlds in software RAID: speed and redundancy. You will need at least three partitions on three separate drives in order to create a RAID 5 array. This tutorial also shows how to set up Logical Volume Management (LVM) on top of the RAID 5 array. Before you begin, you will need to create RAID-aware partitions on your drives and install mdadm on Ubuntu; those steps are covered in a separate tutorial.

You may have to create the RAID device node first by specifying its block major and minor numbers. The md driver uses major number 9, and the minor number matches the device number, so increment both by one for each additional RAID device you create.

# mknod /dev/md2 b 9 2

This creates the device node if you have already used /dev/md0 and /dev/md1. Skip this step if this is your first RAID array.

Create RAID 5
# mdadm --create /dev/md4 --level=5 --raid-devices=3 --spare-devices=1 /dev/sdb9 /dev/sdb10 /dev/sdb11 /dev/sdb12
mdadm: array /dev/md4 started.

Note: for illustration and practice, this example uses four partitions on the same drive. This is NOT what you want in production; the partitions must be on separate drives. It does, however, provide a workable practice scenario. You must specify how many devices are in the array (--raid-devices) and how many spares to use (--spare-devices), and then list the devices you partitioned with fdisk. The example uses four RAID partitions: three active and one spare.

Verify the Process

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md4 : active raid5 sdb11[4] sdb12[3](S) sdb10[1] sdb9[0]
      995712 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [==========>..........]  recovery = 52.9% (264596/497856) finish=0.2min speed=14699K/sec

The (S) marks the spare partition, and the recovery line shows the initial build of the array in progress.
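Optionally, you can inspect the array in more detail; mdadm --detail reports the RAID level, array state, and the role of each member device, including the spare:

# mdadm --detail /dev/md4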
Check /var/log/messages

May 19 10:45:42 ub1 kernel: [  540.691109] md: bind<sdb9>
May 19 10:45:42 ub1 kernel: [  540.693369] md: bind<sdb10>
May 19 10:45:42 ub1 kernel: [  540.695666] md: bind<sdb12>
May 19 10:45:42 ub1 kernel: [  540.697895] md: bind<sdb11>
May 19 10:45:42 ub1 kernel: [  540.767398] True protection against single-disk failure might be compromised.
May 19 10:45:42 ub1 kernel: [  540.767406] raid5: device sdb10 operational as raid disk 1
May 19 10:45:42 ub1 kernel: [  540.767410] raid5: device sdb9 operational as raid disk 0
May 19 10:45:42 ub1 kernel: [  540.767941] raid5: allocated 3170kB for md4
May 19 10:45:42 ub1 kernel: [  540.767953] RAID5 conf printout:
May 19 10:45:42 ub1 kernel: [  540.767955]  --- rd:3 wd:2
May 19 10:45:42 ub1 kernel: [  540.767959]  disk 0, o:1, dev:sdb9
May 19 10:45:42 ub1 kernel: [  540.767963]  disk 1, o:1, dev:sdb10
May 19 10:45:42 ub1 kernel: [  540.768070] RAID5 conf printout:
May 19 10:45:42 ub1 kernel: [  540.768079]  --- rd:3 wd:2
May 19 10:45:42 ub1 kernel: [  540.768084]  disk 0, o:1, dev:sdb9
May 19 10:45:42 ub1 kernel: [  540.768088]  disk 1, o:1, dev:sdb10
May 19 10:45:42 ub1 kernel: [  540.768091]  disk 2, o:1, dev:sdb11
May 19 10:45:42 ub1 kernel: [  540.768116] RAID5 conf printout:
May 19 10:45:42 ub1 kernel: [  540.768119]  --- rd:3 wd:2
May 19 10:45:42 ub1 kernel: [  540.768122]  disk 0, o:1, dev:sdb9
May 19 10:45:42 ub1 kernel: [  540.768125]  disk 1, o:1, dev:sdb10
May 19 10:45:42 ub1 kernel: [  540.768129]  disk 2, o:1, dev:sdb11
May 19 10:45:42 ub1 kernel: [  540.768986] md: recovery of RAID array md4
May 19 10:45:42 ub1 kernel: [  540.768996] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
May 19 10:45:42 ub1 kernel: [  540.769002] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
May 19 10:45:42 ub1 kernel: [  540.769015] md: using 128k window, over a total of 497856 blocks.
May 19 10:46:15 ub1 kernel: [  573.687527] md: md4: recovery done.
May 19 10:46:15 ub1 kernel: [  573.871341] RAID5 conf printout:
May 19 10:46:15 ub1 kernel: [  573.871353]  --- rd:3 wd:3
May 19 10:46:15 ub1 kernel: [  573.871356]  disk 0, o:1, dev:sdb9
May 19 10:46:15 ub1 kernel: [  573.871358]  disk 1, o:1, dev:sdb10
May 19 10:46:15 ub1 kernel: [  573.871364]  disk 2, o:1, dev:sdb11
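Once the array has finished building, you may also want to record it in the mdadm configuration file so it is assembled automatically at boot. A minimal sketch, assuming the default Ubuntu configuration file at /etc/mdadm/mdadm.conf; review the file afterwards to make sure there are no duplicate ARRAY lines:

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf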
Fail a Device

# mdadm /dev/md4 -f /dev/sdb9
mdadm: set /dev/sdb9 faulty in /dev/md4
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md4 : active raid5 sdb11[2] sdb12[3] sdb10[1] sdb9[4](F)
      995712 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
      [===========>.........]  recovery = 57.0% (285568/497856) finish=0.2min speed=15029K/sec

The failed partition is now marked (F), the spare (sdb12) has automatically taken its place, and the array is rebuilding onto it.
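Once the rebuild onto the spare completes, the failed partition can be removed from the array and, after the underlying problem is fixed or the drive is replaced, added back as a new spare. A minimal sketch using the same device names as above:

# mdadm /dev/md4 -r /dev/sdb9
# mdadm /dev/md4 -a /dev/sdb9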
Create LVM on Top of RAID 5

# pvcreate /dev/md4
  Physical volume "/dev/md4" successfully created
# vgcreate vg3 /dev/md4
  Volume group "vg3" successfully created
# lvcreate -L 500M -n raid vg3
  Logical volume "raid" created
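If you want to confirm each LVM layer before creating a filesystem, the standard reporting commands show the physical volume, volume group, and logical volume you just created:

# pvdisplay /dev/md4
# vgdisplay vg3
# lvdisplay /dev/vg3/raid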
# mke2fs -j /dev/vg3/raid
mke2fs 1.40.8 (13-Mar-2008)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
128016 inodes, 512000 blocks
25600 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
63 block groups
8192 blocks per group, 8192 fragments per group
2032 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
Mount the RAID with LVM

# mount /dev/vg3/raid /raid
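Note: the /raid mount point must already exist before you run the mount command. If it does not, create it first:

# mkdir /raid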
# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda2              5809368   2700048   2816536  49% /
varrun                 1037732       104   1037628   1% /var/run
varlock                1037732         0   1037732   0% /var/lock
udev                   1037732       108   1037624   1% /dev
devshm                 1037732        12   1037720   1% /dev/shm
/dev/sda1               474440     49252    400691  11% /boot
/dev/sda4            474367664   1738584 448722352   1% /home
/dev/mapper/vg3-raid    495844     10544    459700   3% /raid
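One benefit of layering LVM on top of the RAID array is that the 500M logical volume uses only part of the volume group, so it can be grown later without rebuilding the array. A minimal sketch, assuming free space remains in vg3; depending on your kernel and e2fsprogs versions you may need to unmount the filesystem before resizing:

# lvextend -L +200M /dev/vg3/raid
# resize2fs /dev/vg3/raid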
You should be able to create files on the new partition. If this works, you can edit /etc/fstab and add a line like the following so the filesystem is mounted automatically at boot (this example assumes the ext3 filesystem and /raid mount point used above):
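/dev/vg3/raid   /raid   ext3   defaults   0   2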