NVME / SSD block cache – bcache vs lvmcache benchmark
Why care about IO performance?
Recently I’ve acquired some new hardware and I want it to perform as fast as possible. The setup is quite trivial for a home desktop, but I wanted it to excel at IO performance as it will also be used as my backup server. A common way to improve performance is to add a cache layer; this applies to many things in IT, and block devices are no exception.
The relevant hardware components for this post are the 7x 2TB drives and 1x NVME card. The setup is not ideal as the models are not all the same: some perform better than others, some are newer and others older. Nevertheless, money and storage capacity were important, and I wanted to put these drives to use anyway. Security is also very important, so all the data written to these drives (including the NVME card) must be encrypted. I also want to be able to expand the raid devices when the time comes, so I use LVM on top; as file system I use XFS with the default settings.
You may wonder why I didn’t use a simpler setup with BTRFS or ZFS. Mostly because I wanted to use raid 5 or 6, and on BTRFS stability is still an issue with these raid levels. With ZFS, on the other hand, it would be difficult to grow the pool in the future.
The logical setup is as follows (a command sketch of this stack follows the list):
- 1x raid 5 with 6 drives (+1 hot spare)
- LVM on top of the raid device
- Cache device or Logical volume
- Block encryption layer – LUKS
- File system – XFS
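For reference, the no-cache version of this stack can be built roughly as follows. The device names (/dev/sd[b-h], /dev/md0, vg_data, lv_data, data_crypt) are placeholders for illustration, not my exact layout, and the cache layer from the list above is added later (see the lvmcache and bcache sketches further down):

```bash
# raid 5 with 6 active drives plus 1 hot spare (7 example devices)
mdadm --create /dev/md0 --level=5 --raid-devices=6 --spare-devices=1 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh

# LVM on top of the raid device
pvcreate /dev/md0
vgcreate vg_data /dev/md0
lvcreate -n lv_data -l 90%FREE vg_data   # leave some free space in the VG to grow later

# LUKS on top of the logical volume
cryptsetup luksFormat /dev/vg_data/lv_data
cryptsetup open /dev/vg_data/lv_data data_crypt

# XFS with default settings
mkfs.xfs /dev/mapper/data_crypt
```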
The hardware list
- NVME Samsung SSD 960 PRO 512GB
- ST2000VN004-2E4164
- ST2000VN004-2E4164
- ST2000VN004-2E4164
- WDC WD2003FYPS-27Y2B0
- WDC WD200MFYYZ-01D45B1
- WDC WD20EZRX-00D8PB0
- WDC WD2000FYYZ-01UL1B1
The NVME device is used for the OS, home, etc., but it also contains an LVM logical volume to be used as cache for the raid device. The NVME's IOPS / bandwidth figures are rather high, going all the way up to 440,000 IOPS and 3.5GB/s of bandwidth, which is quite insane and which I won't be able to exhaust with my day to day use, so it can spare a few IOPS to make my backups go a bit faster.
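As a sketch of how the lvmcache variant can be wired up, assuming the NVME caching partition is /dev/nvme0n1p4 and the data volume is vg_data/lv_data (placeholder names): lvmcache requires the cache device to be a physical volume in the same volume group as the volume it caches.

```bash
# add the NVME partition to the data volume group
pvcreate /dev/nvme0n1p4
vgextend vg_data /dev/nvme0n1p4

# create a cache pool on the NVME and attach it to the data LV in writeback mode
lvcreate --type cache-pool -L 100G -n lv_cache vg_data /dev/nvme0n1p4
lvconvert --type cache --cachepool vg_data/lv_cache \
    --cachemode writeback vg_data/lv_data

# LUKS and XFS then sit on top of vg_data/lv_data exactly as before
```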
I’ve tested bcache and lvmcache, using iozone as the benchmark tool. The tests were run with 256kB, 1MB and 8MB block sizes on a 96GB test file (it needs to be bigger than the total amount of RAM, 64GB).
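The iozone runs looked roughly like the loop below; the mount point and output file names are examples, and the exact flags I used may have differed slightly.

```bash
# one run per record size: 96GB file, sequential write/read plus random read/write,
# fsync included in the timing, Excel-style report written per record size
for rs in 256k 1m 8m; do
    iozone -i 0 -i 1 -i 2 -e -s 96g -r ${rs} \
        -f /mnt/data/iozone.tmp -R -b iozone-${rs}.xls
done
```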
The initial test was made using the full setup without any caching system; it serves as the baseline for comparison.
Each test was done with 3 different block sizes: 256kB, 1MB and 8MB. The cache mode for all cached tests is “writeback” (a bcache writeback sketch follows the list). The stack under test:
- md device, raid 5
- lvm volume
- luks
- xfs
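For the bcache tests, the device is assembled roughly as sketched below, with the logical volume as backing device and the NVME partition as cache device (placeholder names again); make-bcache and bcache-super-show come from bcache-tools.

```bash
# format the backing device (the LVM volume) and the caching device (the NVME)
make-bcache -B /dev/vg_data/lv_data
make-bcache -C /dev/nvme0n1p4

# attach the cache set to the new bcache device and switch it to writeback
CSET_UUID=$(bcache-super-show /dev/nvme0n1p4 | awk '/cset.uuid/ {print $2}')
echo "${CSET_UUID}" > /sys/block/bcache0/bcache/attach
echo writeback > /sys/block/bcache0/bcache/cache_mode

# LUKS and XFS are then created on /dev/bcache0 instead of the plain LV
```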
Results
Using no cache
Test setup
- MD Raid 5
- LVM lv data
- LUKS
- XFS

Conclusions
As far as overall performance goes, the outcome is not what I expected. lvmcache really didn’t seem to improve the system performance: in some of the tests it was quite a bit slower than the no-cache mdraid, and in others only slightly faster. bcache, on the other hand, did show real improvement, being faster in all the tests, some by more than 30%.
Although bcache improves the system, it’s also the most difficult to set up. lvmcache is fully integrated into the LVM tools and the kernel, while bcache requires installing bcache-tools, which is not part of the default install on most distributions.
If you feel comfortable with Linux, block devices, mdraid and LVM, I would recommend it without worries; if you’re not familiar with this set of tools, I would recommend that you test your setup before running it in a server / desktop environment.
The performance benefits are worth the extra work.
Test raw report files
Below are the iozone generated reports and the ods spreadsheet I used to build the graphs.