Testing BTRFS – Performance comparison on a high-performance SSD (BTRfs vs Ext4)

Hi,
Today I was reading about btrfs and, as I had never used it before, I thought I'd give it a try.
On my laptop I have a 256GB SSD, on which I created 2 LVM2 volumes to test btrfs and ext4.
It's not the ideal setup because there's an LVM layer in between, but I'm not in the mood for a backup, erase, install, erase and install cycle. So the tests I'm going to do are on the filesystem itself, not on all the layers that btrfs supports. A good thing about using an SSD is that access time is uniform across the whole block device and data placement doesn't matter, so this is a very good opportunity to get comparable measurements on both ext4 and btrfs.
Here’s the benchmark architecture, tools and setup:

Kernel:

Linux MartiniMan-LAP 2.6.38-31-desktop #1 SMP PREEMPT 2011-04-06 09:01:38 +0200 x86_64 x86_64 x86_64 GNU/Linux

LVM lv creation command:

lvcreate -L 20G -n TestingBTRfs /dev/mapper/system
lvcreate -L 20G -n TestingExt4fs /dev/mapper/system

LVM lvdisplay output:

--- Logical volume ---
LV Name /dev/system/TestingBTRfs
VG Name system
LV UUID zBYf0d-metk-VC9U-YkjE-z1Ts-NMLb-HzYmrJ
LV Write Access read/write
LV Status available
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3

--- Logical volume ---
LV Name /dev/system/TestingExt4fs
VG Name system
LV UUID FJEfiv-Hs9W-zGuV-sJIo-3INN-gh52-YgmsVl
LV Write Access read/write
LV Status available
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4

FS creation command:

mkfs.ext4 /dev/system/TestingExt4fs
mkfs.btrfs /dev/system/TestingBTRfs

Processor:

model name : Intel(R) Core(TM)2 CPU T7200 @ 2.00GHz

HardDrive:

Device Model: SAMSUNG MMDOE56G5MXP-0VB

Mount options (as you can see I didn't apply any optimizations, noatime, etc):

/dev/mapper/system-TestingBTRfs on /mnt/btrf type btrfs (rw)
/dev/mapper/system-TestingExt4fs on /mnt/ext4 type ext4 (rw)
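
For reference, here's a minimal sketch of the commands behind those mount lines (I'm assuming default options, which is what the test used):

mkdir -p /mnt/btrf /mnt/ext4
mount /dev/mapper/system-TestingBTRfs /mnt/btrf
mount /dev/mapper/system-TestingExt4fs /mnt/ext4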

Test software:

'Iozone' Filesystem Benchmark Program

Version $Revision: 3.373 $
Compiled for 64 bit mode.

Command line used for the tests:

 ./iozone -Ra -r4k -r8k -r16k -r32k -r64k -r128 -r1024 -r4096k -r16384k -s1g

This command was used in the btrfs and the ext4 volumes.
The options mean:
-R Excel/office compatible output format
-a automatic test mode
-r the record size (as you can see I used several: 4k, 8k, …)
-s the size of the test file (I used 1GB)
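
If you want to keep the results for the charts, a simple way (just a sketch, the output file names are made up) is to redirect the report to a text file and/or use iozone's -b option, which writes an Excel-compatible results file:

./iozone -Ra -r4k -r8k -r16k -r32k -r64k -r128 -r1024 -r4096k -r16384k -s1g -b ext4-results.xls > ext4-results.txt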

Here are the test results:

And the charts (The scale is logarithmic):

(you may download the data here: [download id=”1″])

Conclusions

As you can see on the charts, for sequential reading/writing there's a performance gain in BTRfs with the smaller record sizes, but the inverse is also true: EXT4 performs better at larger record sizes.

If you look at the random data access, both reading and writing, you'll see that EXT4 is far faster than BTRfs, and according to my daily usage pattern this kind of access makes up about 70% of the accesses to my hard drive. To be honest I'm a bit surprised by such a difference. I know I didn't tune either file system, but the purpose of this benchmark was not to; the idea was to play with the defaults, as most installations out there do.

Another conclusion that is really simple to draw is that bigger record sizes mean better performance.

For now I think I'll stick to EXT4 and LVM; who knows, maybe sometime soon I'll change to BTRFS. I'll let it mature, and I advise you to do the same.

Cheers,

Pedro Oliveira

Why use a redundant swap space or partition

Today I had a problem on one of the servers we support: no web access, no ssh, and no usable console, just a bunch of messages scrolling by so fast I couldn't read them on the terminal. The solution was a simple hard reset and the system came back online. It turned out to be a hard disk failure, but the system booted without trouble because we were using a RAID configuration. One of the disks didn't show up in the RAID array, and a few tests later we declared the hardware fault the cause of the downtime.

But why did the system go down because of a disk failure if there was a RAID system in place? Simple: the swap was spread among the disks but not on top of the RAID, so there were no redundant swap partitions. When the kernel needed data from the swap space on the failed disk, that data wasn't available and the system came to a stop.

From now on we'll create a redundant swap partition on a RAID volume so this doesn't happen again, as a server should never stop because of a single disk problem. Living and learning.
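
For the record, here's a minimal sketch of what a redundant swap on RAID1 looks like with mdadm (the partition and device names are only examples, adjust them to your disks):

# create a RAID1 array from two partitions on different physical disks (example names)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# format the array as swap and enable it
mkswap /dev/md1
swapon /dev/md1
# make it permanent with a line like this in /etc/fstab:
# /dev/md1  none  swap  sw  0  0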

Cheers,

Pedro M. S. Oliveira

BTW – to reassemble the array I used mdadm; below there is a simple usage example if you want to re-add a disk to a previously built array:

mdadm --manage /dev/md0 --add /dev/sda1

This command will add the partition /dev/sda1 to the RAID array /dev/md0.
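
To confirm the disk was added and to watch the rebuild progress, the standard status interfaces are enough:

# quick overview of all md arrays and the resync progress
cat /proc/mdstat
# detailed state of a single array
mdadm --detail /dev/md0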

If you want to learn more about RAID in Linux just type man mdadm or mdadm --help.
