I recently had to set up a new PostgreSQL host for our production environment. Our application is exceptionally read heavy on the DB layer, so a bare-bones Postgres installation is not going to deliver the performance I need.

One way to help with reads and writes on the DB host is to prepare a decent RAID array. I opted for RAID 10, which provides both striping and mirroring, so I also have some peace of mind should a volume be dropped for whatever reason.

### RAID 10

RAID 10, also known as RAID 1+0, combines disk mirroring and disk striping to protect data. A RAID 10 configuration requires a minimum of four disks and stripes data across mirrored pairs. As long as one disk in each mirrored pair is functional, data can be retrieved. For example, the four 50 GiB volumes used below yield a single 100 GiB array that can survive the loss of one disk per mirrored pair.

### Installation and setup

Make sure you have mdadm installed:

sudo apt-get update
sudo apt-get install mdadm
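
You can confirm the install with:

mdadm --version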

Attach the volumes required for your array. For RAID 10 we need a minimum of four drives of equal size. Do not mount the volumes yet, though.

To get the device names, run `fdisk -l`:

Device     Boot Start      End  Sectors Size Id Type
/dev/xvda1 *     4096 16773119 16769024   8G 83 Linux

Disk /dev/xvdd: 50 GiB, 53687091200 bytes, 104857600 sectors
Disk /dev/xvde: 50 GiB, 53687091200 bytes, 104857600 sectors
Disk /dev/xvdb: 50 GiB, 53687091200 bytes, 104857600 sectors
Disk /dev/xvdc: 50 GiB, 53687091200 bytes, 104857600 sectors
Disk /dev/xvdf: 150 GiB, 161061273600 bytes, 314572800 sectors

or

# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE   MOUNTPOINT
xvda    202:0    0   10G  0 disk   
xvdb    202:16   0   50G  0 disk   
xvdc    202:32   0   50G  0 disk   
xvdd    202:48   0   50G  0 disk   
xvde    202:64   0   50G  0 disk   
xvdf    202:80   0  150G  0 disk   

### Prepare the volumes

We need to run `fdisk` on each disk to prepare it for the array. Be sure to use the volumes intended for your array:

# fdisk /dev/xvdb
.. follow fdisk prompts...
# fdisk /dev/xvdc
..
# fdisk /dev/xvdd
..
# fdisk /dev/xvde

fdisk options:

Command (m for help): m

Help:

  Generic
   d   delete a partition
   l   list known partition types
   n   add a new partition
   p   print the partition table
   t   change a partition type
   v   verify the partition table

  Misc
   m   print this menu
   x   extra functionality (experts only)

  Save & Exit
   w   write table to disk and exit
   q   quit without saving changes

  Create a new label
   g   create a new empty GPT partition table
   G   create a new empty SGI (IRIX) partition table
   o   create a new empty DOS partition table
   s   create a new empty Sun partition table

Order of things to do:

* n - create a new partition spanning the whole disk (the defaults are fine)
* t - change the partition type to `fd` (Linux raid autodetect):

Changed type of partition 'Linux' to 'Linux raid autodetect'.

* p - print the partition table to verify:

Command (m for help): p
Disk /dev/xvdb: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9bb7e5d5

Device     Boot Start       End   Sectors Size Id Type
/dev/xvdb1       2048 104857599 104855552  50G fd Linux raid autodetect

* w - save and exit:

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.


When changing the partition type, we want the hex code `fd` (`Linux raid autodetect`); depending on your distribution and fdisk version, the label may read `Linux raid auto` or similar. Be sure to double check the label.

Repeat this process for all the volumes you need for the RAID.
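
As an alternative, if you would rather not step through the interactive prompts for every disk, newer versions of `sfdisk` (util-linux 2.26+) can apply the same layout from a script. A minimal sketch, assuming the same four empty volumes and one full-size `fd` partition per disk:

# WARNING: destructive - triple check the device names first
for dev in /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde; do
    echo 'type=fd' | sudo sfdisk "$dev"
done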

### Create the array
When you are ready to create the array, `mdadm` makes the process really simple. The `--level` option determines which RAID level we are creating; depending on your use case, change this value to suit.

$ sudo mdadm --create --verbose /dev/md0 --level=10 --name=MY_RAID --raid-devices=number_of_volumes device_name1 device_name2 ...

Let's see how it would look with real device names:

$ sudo mdadm --create --verbose /dev/md0 --level=10 --name=db-data --raid-devices=4 /dev/xvdd1 /dev/xvde1 /dev/xvdb1 /dev/xvdc1
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 52394496K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.


Run `cat /proc/mdstat` to view the info about the array:

cat /proc/mdstat

Personalities : [raid10]
md0 : active raid10 xvdc1[3] xvdb1[2] xvde1[1] xvdd1[0]
      104788992 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>
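
For a more detailed view of the array (sync status, member devices, the array's UUID), `mdadm --detail` is useful, and you can watch the initial sync progress as it happens:

sudo mdadm --detail /dev/md0
watch cat /proc/mdstat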


### Filesystems and mount points

Once `mdadm` has done its stuff, we still need to create a filesystem and a mount point before we can mount the array.

I opted to use ext4 as the filesystem for the array:

mkfs.ext4 /dev/md0
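
As an aside, `mkfs.ext4` can usually read the chunk size straight from the md device and align itself automatically, but you can be explicit about it. A sketch assuming the defaults mdadm reported above (512K chunks, 4K ext4 blocks, and 2 data-bearing disks in a 4-disk near-2 RAID 10), so stride = 512K / 4K = 128 and stripe-width = 128 * 2 = 256:

mkfs.ext4 -E stride=128,stripe-width=256 /dev/md0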


And then make the mount point at the desired location:

mkdir /mnt/raid


Then we should add the volume and mount point to `/etc/fstab`, with your required mount options:

nano /etc/fstab
..
/dev/md0    /mnt/raid   ext4    defaults    1 2
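
One caveat: the `/dev/md0` device name is not guaranteed to survive a reboot. If you want to be safe, persist the array definition and reference the filesystem by UUID instead — a sketch, assuming Debian/Ubuntu paths:

# record the array so it is assembled under the same name at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

# find the filesystem UUID, then use UUID=... in /etc/fstab instead of /dev/md0
sudo blkid /dev/md0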


We can now mount everything with `mount -a`.

### Finishing up
We can run `lsblk` to verify the volumes are configured and mounted correctly:

lsblk

NAME      MAJ:MIN RM  SIZE RO TYPE   MOUNTPOINT
xvda      202:0    0   10G  0 disk
└─xvda1   202:1    0    8G  0 part   /
xvdb      202:16   0   50G  0 disk
└─xvdb1   202:17   0   50G  0 part
  └─md0     9:0    0  100G  0 raid10 /mnt/raid
xvdc      202:32   0   50G  0 disk
└─xvdc1   202:33   0   50G  0 part
  └─md0     9:0    0  100G  0 raid10 /mnt/raid
xvdd      202:48   0   50G  0 disk
└─xvdd1   202:49   0   50G  0 part
  └─md0     9:0    0  100G  0 raid10 /mnt/raid
xvde      202:64   0   50G  0 disk
└─xvde1   202:65   0   50G  0 part
  └─md0     9:0    0  100G  0 raid10 /mnt/raid
xvdf      202:80   0  150G  0 disk
└─xvdf1   202:81   0  150G  0 part   /mnt/backup


### Benchmark your array!

A simple way to test the performance of your array is to use `dd`, e.g.:

dd if=/dev/zero of=/mnt/raid/test bs=4096 count=1000


You should see some output like this:

1000+0 records in
1000+0 records out
4096000 bytes (4.1 MB) copied, 0.00317798 s, 1.3 GB/s
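
One caveat with a `dd` test this small: the writes may never leave the page cache, so the reported throughput can be wildly optimistic. Asking `dd` to flush the data to disk before it reports gives a more honest number, e.g.:

dd if=/dev/zero of=/mnt/raid/test bs=1M count=1024 conv=fdatasync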

And that's it! Really nothing to it. Just remember, there are two kinds of people in the world: those who back up, and those who will back up... eventually. When working with data and volumes, be 100% sure that you are working on the correct volumes.

If you get stuck at all or need help, feel free to reach out to me!