by Pedro M. S. Oliveira | Jul 10, 2017 | Linux

Why care about IO performance?
Recently I acquired some new hardware and I want it to perform as fast as possible. The setup is quite trivial for a home desktop; nevertheless, I wanted it to excel at IO performance, as it will also be used as my backup server. A common way to improve performance is to add a caching layer; this applies to many things in IT, and block devices are no exception.
The relevant hardware components for this post are the 7x 2TB drives and 1x NVMe card. The setup is not ideal as the models are not all the same: some perform better than others, some are newer and some older. Nevertheless, price and storage capacity were important, and I wanted to use these drives anyway. Security is also very important, so all the data written to these drives (including the NVMe card) must be encrypted. On the other hand, I want to be able to expand the RAID devices when the time comes, so I also use LVM; as the file system I use XFS with the default settings.
You may wonder why I didn't use a simpler setup with BTRFS or ZFS. Mostly because I wanted to use RAID 5 or 6, and on BTRFS stability is still an issue with these RAID levels; with ZFS, on the other hand, it would be difficult to grow the pool in the future.
The logical setup is as follows (a rough sketch of the corresponding commands appears after the list):
- 1x RAID 5 with 6 drives (+1 hot spare)
- LVM on top of the raid device
- Cache device or Logical volume
- Block encryption layer – LUKS
- File system – XFS
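To make the layering concrete, here is a minimal sketch of how such a stack can be assembled; the device names, volume group name and sizes are illustrative only, not the exact ones I used (the cache layer, covered further down, slots in between the LVM volume and LUKS):
# RAID 5 array with 6 active drives plus 1 hot spare (illustrative device names)
mdadm --create /dev/md0 --level=5 --raid-devices=6 --spare-devices=1 /dev/sd[b-h]
# LVM on top of the md device
pvcreate /dev/md0
vgcreate data /dev/md0
lvcreate -l 100%FREE -n lv_data data
# block encryption layer (LUKS)
cryptsetup luksFormat /dev/data/lv_data
cryptsetup luksOpen /dev/data/lv_data enc_data
# file system
mkfs.xfs /dev/mapper/enc_data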
The hardware list
- NVME Samsung SSD 960 PRO 512GB
- ST2000VN004-2E4164
- ST2000VN004-2E4164
- ST2000VN004-2E4164
- WDC WD2003FYPS-27Y2B0
- WDC WD200MFYYZ-01D45B1
- WDC WD20EZRX-00D8PB0
- WDC WD2000FYYZ-01UL1B1
The NVMe device is used for the OS, home, etc., but it also contains an LVM logical volume to be used as cache for the RAID device. The NVMe's IOPS/bandwidth figures are rather high, going all the way up to 440,000 IOPS and 3.5GB/s of bandwidth, which is quite insane and which I won't be able to exhaust in my day-to-day use, so it can spare a few IOPS to make my backups go a bit faster.
I've tested bcache and lvmcache, using iozone as the benchmark tool. I ran the tests with 256kB, 1MB and 8MB block sizes, and the test file is 96GB (it needs to be bigger than the total amount of RAM, 64GB).
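For reference, the iozone invocations looked roughly like this; the mount point is illustrative and the exact flag set may have differed slightly:
# -i 0 = write/rewrite, -i 1 = read/reread, -e includes flush times,
# -r is the record (block) size, -s the file size, -f the test file
iozone -e -i 0 -i 1 -r 256k -s 96g -f /mnt/storage/iozone.tmp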
The initial test was made using the full setup without any caching system; it serves as the baseline for comparison.
Each test was done with 3 different block sizes (256kB, 1MB, 8MB), and the cache mode for all the cached tests is "writeback". The base stack is:
- md device, raid 5
- lvm volume
- luks
- xfs
Results
Using no cache
Test setup
- MD Raid 5
- LVM lv data
- LUKS
- XFS
Using lvmcache
Test setup
- MD Raid 5
- LVM lv data
- LVM lv meta
- LVM lv cache
- LVM lv cache pool
- LUKS
- XFS
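The lvmcache layer is built entirely with the LVM tools. A minimal sketch, assuming an illustrative NVMe partition and cache sizes (not my exact values):
# add the NVMe partition to the same volume group as the RAID array
pvcreate /dev/nvme0n1p4
vgextend data /dev/nvme0n1p4
# create the cache data and cache metadata LVs on the NVMe PV
lvcreate -L 100G -n lv_cache data /dev/nvme0n1p4
lvcreate -L 1G -n lv_cache_meta data /dev/nvme0n1p4
# turn them into a cache pool and attach it to the data LV in writeback mode
lvconvert --type cache-pool --cachemode writeback --poolmetadata data/lv_cache_meta data/lv_cache
lvconvert --type cache --cachepool data/lv_cache data/lv_data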

Using bcache
Test setup
- MD RAID5
- LVM LV data
- LVM LV cache
- bcache volume
- LUKS
- XFS
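bcache works differently: you format a backing device and a cache device with make-bcache (from bcache-tools), attach one to the other, and then build LUKS and XFS on the resulting /dev/bcache0. A minimal sketch with illustrative LV names:
# the LV on the RAID array is the backing device, an LV on the NVMe is the cache
make-bcache -B /dev/data/lv_data
make-bcache -C /dev/nvme/lv_cache
# find the cache set UUID and attach it to the backing device
bcache-super-show /dev/nvme/lv_cache | grep cset.uuid
echo <cset-uuid> > /sys/block/bcache0/bcache/attach   # replace <cset-uuid> with the value printed above
# switch the cache to writeback mode
echo writeback > /sys/block/bcache0/bcache/cache_mode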

Test results – Benchmark graph

Conclusions
As far as overall performance goes, the outcome was not what I expected. lvmcache really didn't seem to improve the system's performance: in some of the tests it was noticeably slower than the uncached mdraid setup, and in others only slightly faster. bcache, on the other hand, did show real improvement, being faster in all the tests, some by more than 30%.
Although bcache improves the system, it is also the harder of the two to set up: lvmcache is fully integrated into the LVM tools and the kernel, while bcache requires installing bcache-tools, which is not installed by default on most distributions.
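On most distributions the user-space tools live in a package simply called bcache-tools; for example (package manager and repository availability vary by distribution):
apt-get install bcache-tools   # Debian/Ubuntu
dnf install bcache-tools       # Fedora and friends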
If you feel comfortable with Linux, block devices, mdraid and LVM, I would recommend bcache without reservations; if you're not familiar with this set of tools, I would recommend testing your setup before running it in a server or desktop environment.
The performance benefits are worth the extra work.
Test raw report files
Below are the iozone-generated reports and the ODS spreadsheet I used to build the graphs.
iozone_test_without_cache
iozone_test_with_cache_bcache
iozone_test_with_cache_lvmcache
Benchmark results
by Pedro M. S. Oliveira | Jul 20, 2014 | Linux

Nowadays, setting up an encrypted file system can be achieved in a matter of minutes; there's a small drop in FS performance, but it's barely noticeable and the benefits are countless.
All the major distributions allow you to set up the encrypted volume during the installation, which is very convenient for your laptop/desktop; on the server side, however, these options are often neglected.
With this how-to you'll be able to set up an encrypted LVM volume on your CentOS 7 system in 8 easy steps and in less than 15 minutes.
I’m assuming that you’re running LVM already, and that you have some free space available on your volume group (in this case 249G):
The steps:
lvcreate -L249G -n EncryptedStorage storage
Skip the shred command if you only have 15 minutes; look at the explanation below to see if you're willing to do so.
shred -v --iterations=1 /dev/storage/EncryptedStorage
cryptsetup --verify-passphrase --cipher aes-cbc-essiv:sha256 --key-size 256 luksFormat /dev/storage/EncryptedStorage
cryptsetup luksOpen /dev/storage/EncryptedStorage enc_encrypted_storage
mkfs.ext4 /dev/mapper/enc_encrypted_storage
Edit /etc/crypttab and add the following entry:
enc_encrypted_storage /dev/storage/EncryptedStorage none noauto
Edit /etc/fstab and add the following entry:
/dev/mapper/enc_encrypted_storage /encrypted_storage ext4 noauto,defaults 1 2
Finally mount your encrypted volume
mount /encrypted_storage
After reboot you’ll need to run these two commands to have your encrypted filesystem available on your CentOS 7 system:
cryptsetup luksOpen /dev/storage/EncryptedStorage enc_encrypted_storage
mount /encrypted_storage
Now the steps explained.
Step 1:
lvcreate -L249G -n EncryptedStorage storage
I've created a 249GB volume named EncryptedStorage on my volume group storage (each distribution has a naming convention for the volume group name, so you'd better check yours); just type:
vgdisplay
The output:
--- Volume group ---
VG Name storage
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 499.97 GiB
PE Size 32.00 MiB
Total PE 15999
Alloc PE / Size 15968 / 499.00 GiB
Free PE / Size 31 / 992.00 MiB
VG UUID tpiJO0-OR9M-fdbx-vTil-2dty-c7PF-xxxxxx
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 23.51 GiB
PE Size 4.00 MiB
Total PE 6018
Alloc PE / Size 6018 / 23.51 GiB
Free PE / Size 0 / 0
VG UUID sncB8Z-0Upw-VrwH-DOPJ-hELz-377f-yyyyy
As you can see I have 2 volume groups: one created by default on all my VMs, called centos, and another one created by me, called storage. In this how-to I'm using the storage volume group.
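If you only need a quick overview of your volume groups rather than the full vgdisplay output, the vgs command prints a compact summary:
vgs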
Step 2:
shred -v --iterations=1 /dev/storage/EncryptedStorage
This command proceeds at the sequential write speed of your device and may take some time to complete. It is an important step: it makes sure no unencrypted data is left on a previously used device, and it obfuscates which parts of the device contain encrypted data as opposed to just random data. You may omit this step, although that is not recommended.
Step 3:
cryptsetup --verify-passphrase --cipher aes-cbc-essiv:sha256 --key-size 256 luksFormat /dev/storage/EncryptedStorage
In this step we format the volume with our chosen block cipher; in this case I'm using AES encryption in CBC mode, with an ESSIV IV and a 256-bit key.
A block cipher is a deterministic algorithm that operates on blocks of data and allows encryption and decryption of bulk data. The block cipher mode describes the way the block cipher is repeatedly applied to bulk data to encrypt or decrypt it securely. An initialization vector (IV) is a block of data used for ciphertext randomization: it ensures that repeated encryption of the same plain text produces different ciphertext output. An IV must not be reused with the same encryption key, and for ciphers in CBC mode the IV must be unpredictable, otherwise the system could become vulnerable to certain watermark attacks (this is the reason for the sha256 in the ESSIV specification).
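If you want to see how the different ciphers and key sizes perform on your own hardware before picking one, recent versions of cryptsetup include a built-in benchmark:
cryptsetup benchmark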
Step 4:
cryptsetup luksOpen /dev/storage/EncryptedStorage enc_encrypted_storage
Here we open the encrypted volume and assign it to a device that will be mapped through the device mapper; after this step you will be able to perform regular block device operations on it, just like on any other LVM volume.
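You can confirm that the mapping was created, and check the cipher and key size in use, with:
cryptsetup status enc_encrypted_storage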
Step 5:
mkfs.ext4 /dev/mapper/enc_encrypted_storage
Format the volume with the default ext4 settings; you may use whatever flags you wish, though.
Step 6:
Edit /etc/crypttab and add the following line:
enc_encrypted_storage /dev/storage/EncryptedStorage none noauto
With this line we permanently enable the assignment of the /dev/storage/EncryptedStorage volume to the enc_encrypted_storage mapped device.
The noauto setting is important so the server boots correctly when the block device password is not entered during the boot process; this lets you use a custom script or enter the password manually at a later stage over SSH.
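For example, after a reboot you could unlock and mount the volume remotely with something along these lines (the hostname is illustrative; -t allocates a terminal so cryptsetup can prompt for the passphrase):
ssh -t root@yourserver 'cryptsetup luksOpen /dev/storage/EncryptedStorage enc_encrypted_storage && mount /encrypted_storage'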
Step 7:
Edit /etc/fstab and add the following entry:
/dev/mapper/enc_encrypted_storage /encrypted_storage ext4 noauto,defaults 1 2
This is where we map the previously mapped device to a mount point, in this case /encrypted_storage; the noauto option is set for the same reasons as in step 6.
Step 8:
mount /encrypted_storage
A simple mount command; you'll be able to store and access your files in /encrypted_storage, which is a good place for the files you want to keep private on your CentOS system.
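When you no longer need the data available, you can lock the volume again by unmounting it and closing the LUKS mapping:
umount /encrypted_storage
cryptsetup luksClose enc_encrypted_storage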
You may find more information about supported ciphers and options in the Red Hat documentation:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/
Cheers,
Pedro Oliveira