Testing Btrfs – performance comparison on a high-performance SSD (Btrfs vs Ext4)

Today I was reading about Btrfs and, since I had never used it before, I thought I'd give it a try.
My laptop has a 256 GB SSD, on which I created two LVM2 volumes to test Btrfs against Ext4.
It's not the ideal setup because there's an LVM layer in between, but I wasn't in the mood for the backup, erase, install, erase, install cycle. So the tests below exercise the filesystem itself, not all the layers that Btrfs supports. A nice property of an SSD is that access time is uniform across the whole block device, so the physical position of the data doesn't matter; that makes it a good opportunity to get comparable measurements for both Ext4 and Btrfs.
Here’s the benchmark architecture, tools and setup:


Linux MartiniMan-LAP 2.6.38-31-desktop #1 SMP PREEMPT 2011-04-06 09:01:38 +0200 x86_64 x86_64 x86_64 GNU/Linux

LVM lv creation command:

lvcreate -L 20G -n TestingBTRfs /dev/mapper/system
lvcreate -L 20G -n TestingExt4fs /dev/mapper/system

LVM lvdisplay output:

--- Logical volume ---
LV Name /dev/system/TestingBTRfs
VG Name system
LV UUID zBYf0d-metk-VC9U-YkjE-z1Ts-NMLb-HzYmrJ
LV Write Access read/write
LV Status available
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3

--- Logical volume ---
LV Name /dev/system/TestingExt4fs
VG Name system
LV UUID FJEfiv-Hs9W-zGuV-sJIo-3INN-gh52-YgmsVl
LV Write Access read/write
LV Status available
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4

FS creation command:

mkfs.ext4 /dev/system/TestingExt4fs
mkfs.btrfs /dev/system/TestingBTRfs


CPU (from /proc/cpuinfo):

model name : Intel(R) Core(TM)2 CPU T7200 @ 2.00GHz



Mounts (as you can see I didn't use any mount-time optimizations, noatime, etc.):

/dev/mapper/system-TestingBTRfs on /mnt/btrf type btrfs (rw)
/dev/mapper/system-TestingExt4fs on /mnt/ext4 type ext4 (rw)

Test software:

'Iozone' Filesystem Benchmark Program

Version $Revision: 3.373 $
Compiled for 64 bit mode.

Command line used for the tests:

 ./iozone -Ra -r4k -r8k -r16k -r32k -r64k -r128 -r1024 -r4096k -r16384k -s1g

This same command was run on both the btrfs and the ext4 volumes.
The options mean:

-R  produce Excel/office-compatible output
-a  automatic mode (runs the full set of tests)
-r  record size; a bare number is in KB (I used several: 4k, 8k, …)
-s  size of the test file (I used 1 GB)
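Since the same invocation has to be run once per mount point, a small loop of my own (a sketch, not part of iozone) saves retyping it. iozone's -f flag pins its scratch file to the filesystem under test; the echo makes this a dry run:

```shell
# Run the identical iozone invocation on both filesystems, keeping one
# result sheet per mount point. The echo makes this a dry run; switch
# to the commented eval line once the mount points exist.
RECORDS="-r4k -r8k -r16k -r32k -r64k -r128 -r1024 -r4096k -r16384k"
for dir in /mnt/btrf /mnt/ext4; do
  name=$(basename "$dir")
  cmd="iozone -Ra $RECORDS -s1g -f $dir/iozone.tmp"
  echo "would run: $cmd > /tmp/iozone-$name.xls"
  # eval "$cmd" > "/tmp/iozone-$name.xls"
done
```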

Here are the test results:

And the charts (the scale is logarithmic):



As you can see in the charts, for sequential reads and writes Btrfs has a performance edge at the smaller record sizes, but the inverse is also true: Ext4 performs better at the larger record sizes.

If you look at the random reads and writes, you'll see that Ext4 is far faster than Btrfs, and random access makes up roughly 70% of my daily usage pattern. To be honest, I'm a bit surprised by such a difference. I know I didn't tune either filesystem, but that's the point of this benchmark: to play with the defaults, as most installations out there do.

Another conclusion that is really simple to draw is that bigger record sizes mean better performance, since the per-operation overhead is amortized over more data.

For now I think I'll stick with Ext4 and LVM; who knows, maybe I'll switch to Btrfs sometime soon. I'll let it mature, and advise you to do the same.


Pedro Oliveira

Using a recovery CD to restore a backup made with BackupPC – BackupPC as disaster recovery

Sometimes things go wrong. We simply can't avoid it: a simple power failure can harm your data and corrupt your system.

One of these days, on a normal work day, a small server I maintain had a hard disk failure (yes, it's true, it happened again, for the 3rd time this month). This system doesn't have a RAID setup, so the data was lost. No problem, I thought: in the end, all the data is on my BackupPC server.

BackupPC is one of my favourite tools: it's easy to manage and very flexible. I'm not going to write about using BackupPC to back up data, as there are plenty of docs and mailing lists out there with excellent how-tos on the subject.

I booted with the OpenSuSE 11.1 DVD and selected rescue mode.

At the command prompt, using fdisk /dev/sda, I partitioned the drive like the old one (both drives were SATA II), although Linux is flexible enough that you don't even need to match the old layout.

Usually I like to use a volume manager (LVM), but I was short on time and willpower, so I just created 3 partitions: sda1 (150 MB for /boot), sda2 (4 GB for swap) and sda3 (100 GB for /), leaving the rest (400 GB) unpartitioned. I'll use the free space to create volumes afterwards and then move the data there.

Then formated the partitions:

mkswap /dev/sda2

mkfs.ext3 /dev/sda1

mkfs.ext3 /dev/sda3

After this I mounted the filesystems like this:

mount /dev/sda3 /mnt

created boot in /mnt – mkdir /mnt/boot

mount /dev/sda1 /mnt/boot

So now we need to get all the data back into the filesystem… and this is the tricky part: we need an SSH server for it (we could use NFS, or an HTTP download and then untar, but I still like the sshd method better; it uses rsync, so the transfer is really fast).

To do this you need to set up an SSH server from a minimalistic boot system. It isn't hard, just follow these steps:

First give this machine your old IP address, e.g.: ifconfig eth0 <old-ip> up

Create an sshd host key. Remember, this key is just temporary so you can restore your backup; you may delete it afterwards. To create it just type:

ssh-keygen -t rsa -f /mnt/ssh_host_rsa_key -N ""

start sshd by typing:

/usr/sbin/sshd -h /mnt/ssh_host_rsa_key

This will start up sshd with all the default options.

Now just give your root user a password or you won't be able to log in:

passwd root

Add the backuppc public ssh key from the backup server to /root/.ssh/authorized_keys on the restore machine.
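The mechanics of that step, sketched with scratch paths and a stand-in key line (on a real restore, the .pub line comes from the backup server, e.g. ~backuppc/.ssh/id_rsa.pub, and the target directory is /root/.ssh on the restore machine):

```shell
# Demonstration of the authorized_keys step with scratch paths.
tmpdir=$(mktemp -d)
# Stand-in for the real BackupPC public key line:
printf 'ssh-rsa AAAA...example... backuppc@backupserver\n' > "$tmpdir/backuppc.pub"
mkdir -p "$tmpdir/root_ssh" && chmod 700 "$tmpdir/root_ssh"   # stands in for /root/.ssh
cat "$tmpdir/backuppc.pub" >> "$tmpdir/root_ssh/authorized_keys"
chmod 600 "$tmpdir/root_ssh/authorized_keys"
```

sshd is picky about permissions: if .ssh or authorized_keys are group- or world-writable, key login silently fails, hence the chmod lines.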

Finally, accept the host key on the BackupPC side (you can do this by logging in to the BackupPC server and ssh-ing to the restore machine; it will ask you to accept the machine's key, just accept it). Then copy it to the backuppc user's known_hosts file, e.g.:

tail -n 1 ~/.ssh/known_hosts >> ~backuppc/.ssh/known_hosts
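The copy can be sanity-checked with scratch files first; a sketch (the paths stand in for root's and the backuppc user's real known_hosts):

```shell
# Append the freshly accepted host key (the last line of the source
# known_hosts) to the backuppc user's known_hosts; scratch paths and
# fake key material stand in for the real files.
tmpdir=$(mktemp -d)
printf 'old-host ssh-rsa AAAA...\nrestore-host ssh-rsa BBBB...\n' > "$tmpdir/known_hosts"
tail -n 1 "$tmpdir/known_hosts" >> "$tmpdir/backuppc_known_hosts"
```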

Finally, you're done. If you found this long and complicated, don't think of it like that: by now you have configured an entire SSH daemon by hand.

Go to the BackupPC console, choose your host, select the backup you want and just press restore.

For the method choose rsync, but for the destination dir choose /mnt. Go out and grab a coffee; the restore can take a while. After it's done, all you need is to reconfigure grub and maybe /etc/fstab.

Now that the restore is done, check whether /mnt/etc/fstab reflects the partition scheme, and change it accordingly if it doesn't.
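For the partition scheme above, a matching /mnt/etc/fstab would look roughly like this (a sketch: ext3 and default options as created earlier; adjust device names if yours differ):

```text
/dev/sda3   /       ext3   defaults   1 1
/dev/sda1   /boot   ext3   defaults   1 2
/dev/sda2   swap    swap   defaults   0 0
```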

Finally we need to set up grub: edit /mnt/boot/grub/menu.lst and check that your root partition is in the right place.
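For reference, a menu.lst entry matching this layout would look something like the sketch below (kernel and initrd paths are relative to the separate /boot partition, which is (hd0,0) in grub's device naming; the exact file names vary with the installed kernel):

```text
default 0
timeout 8

title openSUSE (restored)
    root (hd0,0)
    kernel /vmlinuz root=/dev/sda3 resume=/dev/sda2
    initrd /initrd
```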

Before you can run grub-install you need two special filesystems, /dev and /proc, mounted inside the target. How do you do that on an already mounted and running system? The answer: bind mounts.

mkdir /mnt/proc; mkdir /mnt/dev; mkdir /mnt/sys

mount -o bind /proc /mnt/proc

mount -o bind /dev /mnt/dev

chroot /mnt

and finally the last command:

grub-install /dev/sda

If you got an OK, leave the chroot (exit), unmount /mnt/dev, /mnt/proc, /mnt/boot and /mnt, and reboot your system; don't forget to eject the DVD before the system boots again.

I think this was my longest post; I hope you find it useful.


Pedro Oliveira
