RAID 5 With LVM on Ubuntu
Server Training - Server Management

RAID 5 provides the best of both worlds in software RAID: speed and redundancy.  You will need at least three separate partitions on three separate drives in order to create RAID 5.  This tutorial will also show you how to install Logical Volume Management on top of the RAID 5 array.

You will need to create RAID-aware partitions on your drives and install mdadm on Ubuntu before you can create the array.
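If mdadm is not installed yet and the partitions have not been prepared, the following is a minimal sketch of those two steps (run as root or prefix the commands with sudo; /dev/sdb is only an example device):

# apt-get install mdadm
# fdisk /dev/sdb

Inside fdisk, create the partitions you need and change their type to fd (Linux raid autodetect) with the t command before writing the table with w.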

You may have to create the RAID device node first with mknod, specifying a block device ("b") and the md major and minor numbers.  The major number for md devices is 9, and the minor number matches the device name, so increment both the name and the minor number by one for each additional RAID device you create.

# mknod /dev/md2 b 9 2

This creates the device node for /dev/md2 if you have already used /dev/md0 and /dev/md1.  Skip this step if this is your first RAID array.
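If you want to confirm that the node was created correctly, list it; the leading b and the 9, 2 pair in the listing confirm a block device with major number 9 and minor number 2:

# ls -l /dev/md2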

Create RAID 5
This example creates the array with a spare.  Four devices are listed: three are active in the array and one is held in reserve as a hot spare so it is available if an active device fails.


# mdadm --create /dev/md4 --level=5 --spare-devices=1 --raid-devices=3 /dev/sdb9 /dev/sdb10 /dev/sdb11 /dev/sdb12

--create
This creates the RAID array.  The device used for the array in this example is /dev/md4.

--level=5
The level option sets the RAID level for the array; here it is RAID 5.

--spare-devices=1
This adds a spare device into the RAID array.

--raid-devices=3 /dev/sdb9 /dev/sdb10 /dev/sdb11 /dev/sdb12
Note: for illustration or practice this example uses four partitions on the same drive.  That is NOT what you want in production; each partition must be on a separate drive.  It does, however, give you a practice scenario.  --raid-devices sets the number of active devices in the array, and after it you list every partition you prepared with fdisk: the three active devices plus the one spare, four partitions in total.

mdadm: array /dev/md4 started.



Verify the Process

# cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md4 : active raid5 sdb11[4] sdb12[3](S) sdb10[1] sdb9[0]
      995712 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [==========>..........]  recovery = 52.9% (264596/497856) finish=0.2min speed=14699K/sec
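As a complement to /proc/mdstat, mdadm itself can report the state of the array.  A minimal check, using the same device name as the example above:

# mdadm --detail /dev/md4

This prints the RAID level, the state of each member device, and whether a rebuild is in progress.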


Check /var/log/messages
You can also verify that RAID is being built in /var/log/messages.
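One convenient way to follow the messages as the array builds, assuming the default syslog location used in this example, is to tail the log:

# tail -f /var/log/messages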

May 19 10:45:42 ub1 kernel: [ 540.691109] md: bind<sdb9>
May 19 10:45:42 ub1 kernel: [ 540.693369] md: bind<sdb10>
May 19 10:45:42 ub1 kernel: [ 540.695666] md: bind<sdb12>
May 19 10:45:42 ub1 kernel: [ 540.697895] md: bind<sdb11>
May 19 10:45:42 ub1 kernel: [ 540.767398] True protection against single-disk failure might be compromised.
May 19 10:45:42 ub1 kernel: [ 540.767406] raid5: device sdb10 operational as raid disk 1
May 19 10:45:42 ub1 kernel: [ 540.767410] raid5: device sdb9 operational as raid disk 0
May 19 10:45:42 ub1 kernel: [ 540.767941] raid5: allocated 3170kB for md4
May 19 10:45:42 ub1 kernel: [ 540.767953] RAID5 conf printout:
May 19 10:45:42 ub1 kernel: [ 540.767955] --- rd:3 wd:2
May 19 10:45:42 ub1 kernel: [ 540.767959] disk 0, o:1, dev:sdb9
May 19 10:45:42 ub1 kernel: [ 540.767963] disk 1, o:1, dev:sdb10
May 19 10:45:42 ub1 kernel: [ 540.768070] RAID5 conf printout:
May 19 10:45:42 ub1 kernel: [ 540.768079] --- rd:3 wd:2
May 19 10:45:42 ub1 kernel: [ 540.768084] disk 0, o:1, dev:sdb9
May 19 10:45:42 ub1 kernel: [ 540.768088] disk 1, o:1, dev:sdb10
May 19 10:45:42 ub1 kernel: [ 540.768091] disk 2, o:1, dev:sdb11
May 19 10:45:42 ub1 kernel: [ 540.768116] RAID5 conf printout:
May 19 10:45:42 ub1 kernel: [ 540.768119] --- rd:3 wd:2
May 19 10:45:42 ub1 kernel: [ 540.768122] disk 0, o:1, dev:sdb9
May 19 10:45:42 ub1 kernel: [ 540.768125] disk 1, o:1, dev:sdb10
May 19 10:45:42 ub1 kernel: [ 540.768129] disk 2, o:1, dev:sdb11
May 19 10:45:42 ub1 kernel: [ 540.768986] md: recovery of RAID array md4
May 19 10:45:42 ub1 kernel: [ 540.768996] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
May 19 10:45:42 ub1 kernel: [ 540.769002] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
May 19 10:45:42 ub1 kernel: [ 540.769015] md: using 128k window, over a total of 497856 blocks.
May 19 10:46:15 ub1 kernel: [ 573.687527] md: md4: recovery done.
May 19 10:46:15 ub1 kernel: [ 573.871341] RAID5 conf printout:
May 19 10:46:15 ub1 kernel: [ 573.871353] --- rd:3 wd:3
May 19 10:46:15 ub1 kernel: [ 573.871356] disk 0, o:1, dev:sdb9
May 19 10:46:15 ub1 kernel: [ 573.871358] disk 1, o:1, dev:sdb10
May 19 10:46:15 ub1 kernel: [ 573.871364] disk 2, o:1, dev:sdb11


Fail a Device
In order to test your RAID 5 array you can mark a device as failed, remove it from the array, and add it back; this is an important procedure to practice.  When an active device fails, the spare takes over automatically and the array rebuilds onto it, which is why the output below shows a recovery in progress.  The commands for removing and re-adding the device follow the output.

# mdadm /dev/md4 -f /dev/sdb9

mdadm: set /dev/sdb9 faulty in /dev/md4


# cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md4 : active raid5 sdb11[2] sdb12[3] sdb10[1] sdb9[4](F)
      995712 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
      [===========>.........]  recovery = 57.0% (285568/497856) finish=0.2min speed=15029K/sec
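The remaining two steps mentioned above are removing the failed partition from the array and adding it back once it has been repaired or replaced.  A minimal sketch, using the same device names as the example:

# mdadm /dev/md4 -r /dev/sdb9
# mdadm /dev/md4 -a /dev/sdb9

Once re-added, /dev/sdb9 normally becomes the new spare, since the original spare has already been rebuilt into the active array.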

 

Create LVM on Top of RAID 5
The advantage of Logical Volume Management is that it allows you to expand your partitions later and to take snapshots or backups of working partitions.  A short example of growing the logical volume appears after the filesystem is created below.

# pvcreate /dev/md4
Create the physical volume out of your RAID array.

Physical volume "/dev/md4" successfully created


# vgcreate vg3 /dev/md4
Create your volume group.

Volume group "vg3" successfully created


# lvcreate -L 500M -n raid vg3
Now create the logical volume.  Note that the logical volume is called raid in this example, but you can call it anything you like.  Also note that the sizes here are small for illustration purposes only; in practice they will be much larger.

Logical volume "raid" created


# mke2fs -j /dev/vg3/raid

You have to place a filesystem on the logical volume.  The -j option creates an ext3 journaling filesystem in this example.

mke2fs 1.40.8 (13-Mar-2008)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
128016 inodes, 512000 blocks
25600 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
63 block groups
8192 blocks per group, 8192 fragments per group
2032 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
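This is also the point where the expansion advantage mentioned earlier comes into play.  The following is a minimal sketch of growing the logical volume and the filesystem on it, assuming vg3 still has free space; if resize2fs asks for it, run e2fsck -f /dev/vg3/raid first:

# lvextend -L +200M /dev/vg3/raid
# resize2fs /dev/vg3/raid

lvextend grows the logical volume by 200 MB, and resize2fs then grows the ext3 filesystem to fill the enlarged volume.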


Mount the RAID with LVM
In order to use the RAID array you will need to mount the logical volume on the filesystem.  For testing purposes you can create a mount point and mount it by hand; to make the mount permanent you will need to edit /etc/fstab.
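The mount point itself has to exist before anything can be mounted on it.  Assuming /raid as the mount point used in the example below:

# mkdir /raid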

# mount /dev/vg3/raid /raid


# df

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda2              5809368   2700048   2816536  49% /
varrun                 1037732       104   1037628   1% /var/run
varlock                1037732         0   1037732   0% /var/lock
udev                   1037732       108   1037624   1% /dev
devshm                 1037732        12   1037720   1% /dev/shm
/dev/sda1               474440     49252    400691  11% /boot
/dev/sda4            474367664   1738584 448722352   1% /home
/dev/mapper/vg3-raid    495844     10544    459700   3% /raid

 

You should be able to create files on the new partition.  If this works, you can edit /etc/fstab and add a line that looks like this:

/dev/vg3/raid          /raid                    ext3              defaults           0     2

The entry mounts the logical volume rather than /dev/md4 itself, because the filesystem lives on the logical volume.  Be sure to test the entry and be prepared to enter single-user mode to fix any problems with the new RAID device.
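One way to test the fstab entry without rebooting, assuming the line above and that the volume is still mounted from the earlier test:

# umount /raid
# mount -a
# df /raid

If df shows /dev/mapper/vg3-raid mounted on /raid again, the fstab entry works.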

 


Copyright CyberMontana Inc. and BeginLinux.com
All rights reserved. Cannot be reproduced without written permission. Box 1262 Trout Creek, MT 59874