pv – Concatenate files, or stdin, to stdout, with monitoring

PV - bottleneck

A few days ago I needed to debug the output of a stream. The problem was that the output bandwidth was not always constant, and the input application didn't behave the same under different workloads.

A colleague told me to check out pv. As I had never used it before, I checked the man page first, and it seemed promising. Below are some details about it.

pv can be used to:

-p, --progress        show progress bar
-t, --timer           show elapsed time
-e, --eta             show estimated time of arrival (completion)
-r, --rate            show data transfer rate counter
-a, --average-rate    show data transfer average rate counter
-b, --bytes           show number of bytes transferred
-F, --format FORMAT   set output format to FORMAT
-n, --numeric         output percentages, not visual information

You can also change the standard behaviour of a pipe (probably the most interesting part); you'll be able to:

-L, --rate-limit RATE    limit transfer to RATE bytes per second
-B, --buffer-size BYTES  use a buffer size of BYTES
-E, --skip-errors        skip read errors in input
-S, --stop-at-size       stop after --size bytes have been transferred

Here are 3 pv usage examples:

Limit the bandwidth available within a pipe:

In this case I'll limit the transfer to 1 MiB/s (1048576 bytes/s) while writing a 10 MB file (please note that the limits on both dd and pv are in bytes).

dd count=10 bs=1048576 if=/dev/zero | pv -L 1048576 | dd of=to_delete.file
10+0 records in [1021kiB/s] [ <=> ]
10+0 records out
10485760 bytes (10 MB) copied, 9.84052 s, 1.1 MB/s
10MiB 0:00:09 [1.01MiB/s] [ <=> ]
20400+100 records in
20480+0 records out
10485760 bytes (10 MB) copied, 9.93452 s, 1.1 MB/s
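The ~10-second runtime dd reports is exactly what the arithmetic predicts: 10 MiB pushed through a 1 MiB/s cap. A quick sanity check in the shell (pure arithmetic, no pv needed):

```shell
bytes=$((10 * 1048576))   # dd writes 10 blocks of 1 MiB
rate=1048576              # pv -L 1048576 caps the pipe at 1 MiB/s
echo "$((bytes / rate)) seconds expected"
```

This prints "10 seconds expected", matching the 9.84 s measured above (the small difference is pipeline start-up overhead).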

Write only a 5 MB file from the pipe:

dd count=10 bs=1048576 if=/dev/zero | pv -S -s 5242880 | dd of=to_delete.file
5MiB 0:00:00 [92.4MiB/s] [===========================================>] 100%
10240+0 records in
10240+0 records out
5242880 bytes (5.2 MB) copied, 0.073623 s, 71.2 MB/s
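pv -S with -s N truncates the stream at N bytes. If pv isn't at hand, a similar truncation can be reproduced with coreutils' head -c (without the monitoring, which is the whole point of pv, so this is just to illustrate the byte accounting):

```shell
# Truncate the stream at 5 MiB, like pv -S -s 5242880 does, and count the bytes.
dd count=10 bs=1048576 if=/dev/zero 2>/dev/null | head -c 5242880 | wc -c
```

This prints 5242880, the same byte count dd reports in the example above.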

Increase the buffer size for faster transfers

With default buffers (512KB):

dd count=500 bs=1048576  if=/dev/zero | pv | dd of=/dev/null
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 0.684948 s, 765 MB/s
500MiB 0:00:00 [ 731MiB/s] [ <=>                                                                       ]
1024000+0 records in
1024000+0 records out
524288000 bytes (524 MB) copied, 0.683847 s, 767 MB/s

With a bigger buffer (5MB):

dd count=500 bs=1048576  if=/dev/zero | pv -B 5242880 | dd of=/dev/null
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 0.667252 s, 786 MB/s
500MiB 0:00:00 [ 750MiB/s] [ <=>                                                                       ]
1024000+0 records in
1024000+0 records out
524288000 bytes (524 MB) copied, 0.667482 s, 785 MB/s

If you want to have more information or check some other use cases you may check this post on cyberciti.

See you next time,

Pedro Oliveira



MySQL-ZRM and BackupPC – CentOS 7

MySQL-ZRM and BackupPC to the rescue

Backups can be a tricky thing. All of us who have done system administration, maintenance, or systems engineering or architecture have at some point had to choose a backup mechanism which, depending on the requirements, can be a simple bash script that uses tar or rsync, a robust solution like BackupPC or Bacula, a backup appliance, and so on.

Today, while reading the BackupPC mailing list, someone asked about the best way to use it to back up a MySQL DB. As always there is a multitude of options; one of my favourites is combining MySQL-ZRM and BackupPC. I'm a fanboy of BackupPC: I've used it for years, both in personal and in enterprise projects. I'm not going to describe how to install BackupPC or make it run on your system, as there is a lot of information about this online (just check the BackupPC home page).

Although BackupPC is a great tool, it won't guarantee the state of your databases at the moment of the copy; for that you need another tool, and my favourite is MySQL-ZRM. MySQL-ZRM will make sure that your new MySQL or MariaDB backup is consistent, and this backup can then be retrieved by BackupPC and stored on the backup server.


Installing MySQL-ZRM on CentOS 7

As the title of the post says, I'll be using CentOS 7, so the first thing I need to do is install the EPEL repo on my CentOS 7 server:

rpm -Uvh https://ftp.fau.de/epel/7/x86_64/e/epel-release-7-1.noarch.rpm

Now that we have the repo installed, we need to install MySQL-ZRM:

yum install -y MySQL-zrm

Considerations on MySQL-ZRM on CentOS 7

The main difference in the configuration is the backup mode, which can be:

  • RAW
  • Logical

Raw mode gives you the best performance possible during the backup; nevertheless it requires LVM, and I would only advise using it if you're familiar with the concept. To start with, you should have a logical volume for your MySQL data dir (usually /var/lib/mysql/), and you should have free space available on your volume group: at least double the space MySQL needs for operation during the backup, but please be generous here, because if you run out of space you will truncate your DBs. On the other hand, the performance considerations may not hold, as they will vary with your use case; what RAW mode does guarantee is that there will be no locks on the DB during the backup. If you really need performance to be unaffected during the seconds or minutes of the backup, I would recommend a master/slave setup where you do the backups from the slave host, thus not impacting the master.

The Logical backup mode doesn't have any special requirements; nevertheless, you'll be "write locking" the tables for the duration of the backup. With recent hardware even big backups can be fast, but if you're talking about a 200GB DB miracles won't happen; in such cases I would recommend the RAW mode.


Setting up your MySQL server to make it suitable for MySQL-ZRM

To make your MySQL server suitable for MySQL-ZRM you need to create a user with the right set of permissions; also, if you're not backing up data on the same server that runs MySQL-ZRM, you need to enable TCP on your MySQL server.

Create a MySQL user with the correct set of permissions:

mysql -h localhost -p # or whatever IP or hostname where your MySQL lives

grant select, insert, update, create, drop, reload, shutdown, alter, super, lock tables, replication client on *.* to 'backupuser'@'localhost' identified by 'very secret password';

Setting up MySQL-ZRM on CentOS 7

After installing MySQL-ZRM we need to set it up by editing its configuration.

The config file is located at:


In this example we will use the Logical backup mode; the main configuration changes are:

destination=/var/lib/mysql-zrm # backups destination folder (can be an NFS share, SMB share, USB mount point, etc.)
retention-policy=15D # how many days to keep the backup in the destination folder
compress=1 # compress backups; 1 = enabled, 0 = disabled
compress-plugin=/usr/bin/gzip # compression utility to use
all-databases=1 # back up all the databases on the MySQL server? In this case we do
user="backupuser" # user authorized to back up your databases
password="very secret password" # the password
host="your.server.hostname" # server host name
routines=1 # back up MySQL routines? In this case yes
verbose=0 # do we want the log to be verbose?
mailto="backup-list@linux-geex.com" # backup admin email; if you have a local MTA correctly configured you'll receive an email if backups didn't finish properly, depending on the email policy described below

If you are backing up a remote server you'll also need to enable TCP connections in my.cnf; this can be achieved by setting, in the [mysqld] section (and making sure skip-networking is not set):

port = 3306

Please keep in mind that you should be very careful when exposing MySQL, so set your iptables firewall to only allow connections from the backup server and other desired MySQL clients; below is an example of how to do it:

iptables -I INPUT -m tcp -p tcp --dport 3306 -i eth0 -s <web-server-IP> -m comment --comment "Allow access to web server" -j ACCEPT

iptables -I INPUT -m tcp -p tcp --dport 3306 -i eth0 -s <backup-server-IP> -m comment --comment "Allow access to MySQL-ZRM server" -j ACCEPT

Here --dport is the destination port, -i eth0 is the interface where you want the filter to be active (you may omit it and the rule will apply to all interfaces), -s is the allowed source IP, and -j ACCEPT is the target for the rule, in this case to ACCEPT the packet.
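When several hosts need access, the rules can also be generated from a list of allowed client IPs. The sketch below only prints the rules so you can review them before running (192.0.2.x are documentation placeholder addresses; substitute your own IPs, interface, and comments):

```shell
# Print (not execute) one iptables ACCEPT rule per allowed MySQL client IP.
for ip in 192.0.2.10 192.0.2.20; do
  echo "iptables -I INPUT -p tcp --dport 3306 -i eth0 -s $ip -m comment --comment 'MySQL client' -j ACCEPT"
done
```

Once the output looks right, pipe it through sh (as root) to apply it.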


Setting up MySQL-ZRM backup frequency

MySQL-ZRM uses cron to run the backups, so the frequency is whatever is defined in the cron entry. Many people just use root's crontab for everything; although this is possible, it's not the most correct way of doing it.

Again there are multiple possibilities of doing this:

  • Use mysql-zrm-scheduler, a tool that will help you create the crontab entry with the correct parameters; you can check how it works just by typing mysql-zrm-scheduler on the command line.
  • Edit the crontab entry directly if you know the parameters (my favourite, and all the parameters are also very well documented).

For a once-a-day backup of your database you would need to create /etc/cron.d/mysql-zrm with the following content:

0 1 * * * root /usr/bin/zrm-pre-scheduler --action backup --backup-set `hostname -s` --backup-level 0 --interval daily

0 3 * * * root /usr/bin/mysql-zrm --action purge

This will trigger a backup every night at 1:00 AM, and a purge of the old content at 3:00 AM. Please note that if you're backing up a server other than localhost, you should replace `hostname -s` with the FQDN of the desired server.


Integrating with BackupPC

Integration may be achieved in two distinct ways:

  1. Let BackupPC retrieve the files from the destination folder specified above; this is the easiest and will probably suit most setups.
  2. Trigger the backup execution from within BackupPC.

I'll focus on the second option, as the first one is enabled by default if you include the destination in the folders backed up by BackupPC.


MySQL-ZRM scheduler configuration if integrated with BackupPC

Edit your /etc/cron.d/mysql-zrm like this:

0 3 * * * root /usr/bin/mysql-zrm --action purge

As you can see, one entry is missing: the backup command execution will be triggered by BackupPC instead.


BackupPC triggering MySQL-ZRM configuration

I’ll assume you already have your BackupPC server configured and that the destination folder is already in the path to be backed up.

  1. Log in to the BackupPC web interface
  2. Select the server that holds the DBs to be backed up
  3. Choose "Edit config"
  4. Choose the "Backup Settings" tab (default)
  5. Below "User Commands" there is a text box named "DumpPreUserCmd" where you'll insert:

mysql-zrm-backup --backup-set `hostname -s` --backup-level 0


Setting up MySQL backups is not a hard task; there is a multitude of options out there, and this is just one of them. I would recommend having a deep look at the official BackupPC and MySQL-ZRM documentation, as this post only touches the surface of what these two pieces of software can do.

As important as doing backups is properly testing recovery of the data to the desired state. It's not enough to be able to list the backup content: you should be able to restore the full service. Check that you can do it from a full backup, then from a differential or incremental backup. It's also important to know what those are and to be "fluent" with the backup software; this may be the difference between a headache and getting your head cut off.

Keep calm and keep your backups up to date!

Pedro M. S. Oliveira

Swap space increase on a running Linux server


If you see that your server is running out of swap you should add more RAM; nevertheless, this is not always possible, or maybe you need that extra amount for a very specific usage.

If that is the case, you just need to add some more swap to your system. There are several use cases; I'll cover the two most common ones, with and without LVM.


Adding swap space without LVM

If you're not using LVM and you don't have any other location for a new swap partition, you can place a swap file on one of the file systems available in the system.

  • Create a file that can be used as swap. If you have more than one file system available, choose the one with the best performance; in this case we will use /. The file will have 16GB and will be called extra_swap.fs.

dd if=/dev/zero of=/extra_swap.fs count=16000 bs=1048576

  • Format the file

mkswap /extra_swap.fs

  • Set the right permissions on the file

chmod 600 /extra_swap.fs; chown root:root /extra_swap.fs

  • Enable it

swapon /extra_swap.fs

  • Make it permanent (if needed)

echo "/extra_swap.fs swap swap defaults 0 0" >> /etc/fstab
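As a sanity check on the dd invocation above: count=16000 blocks of bs=1048576 bytes yields 16 GB decimal, or roughly 15.6 GiB:

```shell
count=16000; bs=1048576                # the dd arguments used above
bytes=$((count * bs))
echo "$bytes bytes = $((bytes / 1000000000)) GB = ~$((bytes / 1073741824)) GiB"
```

This prints "16777216000 bytes = 16 GB = ~15 GiB" (integer division floors the GiB figure). Adjust count if you want an exact power-of-two size.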


Adding swap space with LVM (method 1)

This method applies if you have LVM and you're not able to disable swap (for instance, on production servers with high system load and memory usage).

  • Add a new 16G volume (on a volume group called VolumeGroupName; adjust this to the desired volume group)

lvcreate -n extra_swap_lv -L16G VolumeGroupName

  • Format the volume

mkswap /dev/VolumeGroupName/extra_swap_lv

  • Enable it

swapon  /dev/VolumeGroupName/extra_swap_lv

  • Make it permanent (if needed)

echo "/dev/VolumeGroupName/extra_swap_lv swap swap defaults 0 0" >> /etc/fstab


Adding swap space with LVM (method 2)

If you are able to disable swap for a while (less than 10 minutes), this is the recommended method.

  • Disable your current swap volume (please take into consideration that this can have a negative impact on performance; use with caution).

swapoff /dev/VolumeGroupName/swap_volume_name

  • Expand your current volume by adding 16GB to the logical volume swap_volume_name (adjust this to the desired logical volume)

lvextend -L+16G /dev/VolumeGroupName/swap_volume_name

  • Format the volume

mkswap /dev/VolumeGroupName/swap_volume_name

  • Enable it

swapon  /dev/VolumeGroupName/swap_volume_name



Check the excellent RHEL manual about swap.


I hope you don't have to go through this; as said before, the best option is to buy some more RAM.


Pedro M. S. Oliveira


megacli basic usage

LSI Sandforce raid - megacli managed


When you have large deployments with thousands of SSDs and spinning disks, the megacli utility provides great help by making all the features and options available in a way that can be easily scripted and therefore automated.
This listing contains only the most-used (and therefore limited) set of options; many more exist, so please check the references at the bottom.

Here are a few of the ones I find most useful; they only apply to LSI RAID controllers.
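Since the main selling point here is scriptability, one small pattern helps: megacli addresses drives as [enclosure:slot], and a tiny helper function (physdrv below is my own invention, not part of megacli) keeps loops over many drives readable. It only assembles the command string:

```shell
# Hypothetical helper: build megacli's [enclosure:slot] drive selector.
physdrv() { printf '%s' "-PhysDrv[$1:$2]"; }

# Example: assemble (but don't run) an online command for enclosure 32, slot 2.
echo "megacli -PDOnline $(physdrv 32 2) -a0"
```

This prints `megacli -PDOnline -PhysDrv[32:2] -a0`; in a real script you would loop over slots and execute the result.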



This will list all the physical devices on adapter 0; if you have more than one controller in your server you may use -aAll.


megacli -PDList -a0


The same as above, but it makes it easy to spot a slot with errors.


megacli -PDList -a0 | grep "Slot\|Error"
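Taking the grep idea one step further, awk can pair each slot with its error counters and report only the slots that have errors. The sample here-string stands in for real `megacli -PDList -a0` output (field names as megacli prints them):

```shell
# Sample of what `megacli -PDList -a0 | grep "Slot\|Error"` returns.
sample='Slot Number: 4
Media Error Count: 0
Other Error Count: 3
Slot Number: 5
Media Error Count: 0
Other Error Count: 0'

# Remember the last slot seen; print only the non-zero error counters.
printf '%s\n' "$sample" |
  awk -F': ' '/Slot Number/ {slot=$2} /Error Count/ && $2 > 0 {print "slot " slot ": " $1 " = " $2}'
```

Here the output is "slot 4: Other Error Count = 3"; against a live controller, replace the here-string with the real megacli pipeline.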


This will display all the settings for logical device 1 on controller 0.


megacli -LDInfo -L1 -a0


Display the consistency of logical device 1 on controller 0.


megacli -LDGetProp Consistency -L1 -a0


Show rebuild progress on logical device 2 on adapter 0.


megacli -LDRecon -ShowProg -L2 -a0


List the preserved cache status on adapter 0.


megacli -GetPreservedCacheList -a0


Display auto rebuild state on adapter 0.


megacli -AdpAutoRbld -Dsply -a0


Display missing physical devices on controller 0.


megacli -PdGetMissing -a0


Create a file called megacli_events_since_reboot that will contain all the events logged by all the controllers, this will include warnings, info messages and errors since last reboot.


megacli -AdpEventLog -GetSinceReboot -f megacli_events_since_reboot -aALL


Create a file called megacli_events_since_shutdown that will contain all the events logged by all the controllers, this will include warnings, info messages and errors since last shutdown.


megacli -AdpEventLog -GetSinceShutdown -f megacli_events_since_shutdown -aALL


Show the rebuild progress for the drive in slot 21, enclosure 32, all adapters (you can use -a0 if you just have one adapter).


megacli -pdrbld -showprog -physdrv[32:21] -aALL



Add/alter settings:

Set the rebuild rate to 60% (1-100); this means rebuilds have higher priority than OS calls.


megacli -AdpSetProp RebuildRate 60 -a0


Discard the preserved cache for all logical devices on adapter 0.


megacli -DiscardPreservedCache -Lall -a0


Take the device in slot 2, enclosure 32, adapter 0 offline.


megacli -PDOffline -PhysDrv [32:2]  -a0


Bring the device in slot 2, enclosure 32, adapter 0 back online.


megacli -PDOnline -PhysDrv [32:2] -a0


Flag the device in slot 2, enclosure 32, adapter 0 as good to be used.

megacli -PDMakeGood -PhysDrv[32:2] -a0


Mark the drive in slot 2, enclosure 32, adapter 0 as missing.


MegaCli -PDMarkMissing -PhysDrv [32:2] -a0


Prepare the drive in slot 2, enclosure 32, adapter 0 for removal.


MegaCli -PdPrpRmv -PhysDrv [32:2] -a0


Replace the missing physical drive on slot 2, enclosure 32 on array 2, row 2, adapter 0.

You may find more information on the missing drive with the option -PdGetMissing (explained above).


megacli -PdReplaceMissing -PhysDrv[32:2] -Array2 -row2 -a0


Start a rebuild on the drive in slot 2, enclosure 32, adapter 0.


megacli -PDRbld -Start -PhysDrv[32:2] -a0


Create a logical device with RAID 0 on the physical device in slot 5, enclosure 32, adapter 0.


megacli -CfgLdAdd -r0[32:5] -a0


Set a dedicated hotspare for logical device 1 (here it's called an array) using the device in slot 18, enclosure 32, adapter 0.


megacli -PDHSP -Set -Dedicated -Array1 -PhysDrv[32:18] -a0


Remove hotspare located on slot 6, enclosure 32, adapter 0.


megacli -PDHSP -Rmv -PhysDrv[32:6] -a0


Make drive on slot 18, enclosure 32, adapter 0 offline.


megacli -PDOffline -PhysDrv [32:18] -a0


Add 8 drives to an existing RAID 6 array; in this case we add them to logical device 2 on adapter 0.


megacli -LDRecon -Start -r6 -Add -PhysDrv[32:14,32:15,32:16,32:17,32:18,32:19,32:20,32:21] -L2 -a0


Drive firmware update procedure:

* A firmware upgrade can brick your device; taking the drive offline first is MANDATORY if the drive is online.

Make drive on slot 18, enclosure 32, adapter 0 offline


megacli -PDOffline -PhysDrv [32:18] -a0


Update the firmware on the drive in slot 18, enclosure 32, adapter 0, with a binary file called fw.bin


megacli -PdFwDownload -PhysDrv[32:18] -f fw.bin -a0


Put drive back online.


megacli -PDOnline -PhysDrv [32:18] -a0


OS rescan of the logical device

After adding/removing/editing a logical volume, tell the OS to rescan it; note that you need to do this for every block device you changed, in this case /dev/sda:

echo 1 > /sys/block/sda/device/rescan


More information:



Megacli official manual (PDF)



Pedro M. S. Oliveira

CentOS 7 – How to setup your encrypted filesystem in less than 15 minutes


Nowadays, setting up an encrypted file system is something that can be achieved in a matter of minutes; there's a small drop in FS performance, but it's barely noticeable and the benefits are countless.

All the major distributions allow you to conveniently set up an encrypted volume during installation, which is very convenient for your laptop/desktop; on the server side, however, these options are often neglected.

With this how-to you'll be able to set up an encrypted LVM volume on your CentOS 7 in 8 easy steps and less than 15 minutes.

I’m assuming that you’re running LVM already, and that you have some free space available on your volume group (in this case 249G):


The steps:


lvcreate -L249G -n EncryptedStorage storage


Skip the shred command if you just have 15 minutes; look at the explanation below to see if you're willing to do so.


shred -v --iterations=1 /dev/storage/EncryptedStorage

cryptsetup --verify-passphrase --cipher aes-cbc-essiv:sha256 --key-size 256 luksFormat /dev/storage/EncryptedStorage

cryptsetup luksOpen /dev/storage/EncryptedStorage enc_encrypted_storage

mkfs.ext4 /dev/mapper/enc_encrypted_storage


Edit /etc/crypttab and add the following entry:


enc_encrypted_storage /dev/storage/EncryptedStorage none noauto


Edit /etc/fstab and add the following entry:


/dev/mapper/enc_encrypted_storage /encrypted_storage ext4 noauto,defaults 1 2


Finally mount your encrypted volume


mount /encrypted_storage



After reboot you’ll need to run these two commands to have your encrypted filesystem available on your CentOS 7 system:


cryptsetup luksOpen /dev/storage/EncryptedStorage enc_encrypted_storage

mount /encrypted_storage



Now the steps explained.

Step 1:


lvcreate -L249G -n EncryptedStorage storage

I've created a 249GB volume named EncryptedStorage on my volume group, storage (each distribution has a naming convention for the volume group name, so you'd better check yours; just type vgdisplay).

The output:

--- Volume group ---
VG Name storage
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
Cur LV 2
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 499.97 GiB
PE Size 32.00 MiB
Total PE 15999
Alloc PE / Size 15968 / 499.00 GiB
Free PE / Size 31 / 992.00 MiB
VG UUID tpiJO0-OR9M-fdbx-vTil-2dty-c7PF-xxxxxx

--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 23.51 GiB
PE Size 4.00 MiB
Total PE 6018
Alloc PE / Size 6018 / 23.51 GiB
Free PE / Size 0 / 0
VG UUID sncB8Z-0Upw-VrwH-DOPJ-hELz-377f-yyyyy

As you can see, I have 2 volume groups: one installed by default on all VMs, called centos, and another installed by me, called storage. In this how-to I'm using the storage volume group.
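As a side note, the sizes vgdisplay reports can be cross-checked from the physical extent (PE) figures; for the storage group, 31 free extents of 32 MiB each account for the 992 MiB shown:

```shell
free_pe=31; pe_size_mib=32              # Free PE and PE Size from vgdisplay above
echo "$((free_pe * pe_size_mib)) MiB free"
```

This prints "992 MiB free", matching the "Free PE / Size" line in the output.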

Step 2:


shred -v --iterations=1 /dev/storage/EncryptedStorage

This command proceeds at the sequential write speed of your device and may take some time to complete. It is an important step to make sure no unencrypted data is left on a used device, and to obfuscate the parts of the device that contain encrypted data as opposed to just random data.

You may omit this step, although it's not recommended.

Step 3:


cryptsetup --verify-passphrase --cipher aes-cbc-essiv:sha256 --key-size 256 luksFormat /dev/storage/EncryptedStorage

In this step we format the volume with our selected block cipher; in this case I'm using AES in CBC mode with an ESSIV IV and a 256-bit key.

A block cipher is a deterministic algorithm that operates on data blocks and allows encryption and decryption of bulk data. The block cipher mode describes the way the block cipher is repeatedly applied on bulk data to encrypt or decrypt it securely. An initialization vector (IV) is a block of data used for ciphertext randomization; it ensures that repeated encryption of the same plain text produces different ciphertext output, and it must not be reused with the same encryption key. For ciphers in CBC mode, the IV must be unpredictable, otherwise the system could become vulnerable to certain watermark attacks (and this is the reason for the sha256 in the ESSIV setting).


Step 4:


cryptsetup luksOpen /dev/storage/EncryptedStorage enc_encrypted_storage

Here we open the encrypted volume and assign it to a device mapped with device mapper; after this step you will be able to do regular block device operations on it, like on any other LVM volume.


Step 5:


mkfs.ext4 /dev/mapper/enc_encrypted_storage

Format the volume with the default ext4 settings; you may use whatever flags you wish, though.


Step 6:

Edit /etc/crypttab and add the following line:


enc_encrypted_storage /dev/storage/EncryptedStorage none noauto

With this line we permanently enable the assignment of the /dev/storage/EncryptedStorage volume to the enc_encrypted_storage mapped device.

The noauto setting is important so the server boots correctly even if the block device password is not entered during the boot process; this enables you to use a custom script or to enter the password manually at a later stage over SSH.


Step 7:

Edit /etc/fstab and add the following entry:


/dev/mapper/enc_encrypted_storage /encrypted_storage ext4 noauto,defaults 1 2

This is where we map the previously mapped device to a mount point, in this case /encrypted_storage; the noauto value is set for the same reasons as in step 6.


Step 8


mount /encrypted_storage

A simple mount command; you'll be able to store and access your files in /encrypted_storage, a good place for the files you want to keep private on your CentOS system.

You may find more information about supported ciphers and options in the Red Hat documentation:



Pedro Oliveira

First impression on CentOS 7

After testing the new CentOS 7.0, here are my first impressions:

  1. systemctl took some time to get into (learning done on my SuSE laptop distro).
  2. I don't really like the new firewall configuration; although I see some advantages, if you already know iptables well enough there's not much to gain.
  3. The default CentOS 7 installer is very good, and kickstart also works like a charm.
  4. I would love to see a 3.13.x kernel instead of the 3.10.x, though (I need to read what has been back-ported).
  5. The new default XFS file system on CentOS 7 surprised me, and for standard VM installs I've gone back to ext4 with some custom options; pay attention to this, especially if you are deploying small VMs. Big file systems on machines with more than 2 cores will benefit from XFS.
  6. The CentOS 7 network configuration manager also changed, and for the better; it's nice to be able to fully use NetworkManager (nm), and the command-line interface is really nice (nmtui).
  7. The boot loader is now grub2 instead of grub.
  8. The EPEL repo for CentOS 7 is also great, with all the goodies we are used to; you may install it just by running:


rpm -Uvh http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.2.noarch.rpm


I still think Redhat and its army of clones are ahead of all the major distributions for carrying mission-critical workloads and delivering the best performance out of your boxes.

Great work CentOS team, and thank you Redhat!

PS: I'm also proud of Redhat 6. A few years ago I installed an email cluster consisting of 2 servers and SAN storage; this email cluster served 3000+ IMAP accounts (10GB quota, maildir format). The nice thing is that it was installed with Redhat 6.0 (approx. one month after release) and it's still running without reboots (4 years). This is impressive, but now I hope my ex-employer updates it to Redhat 7, although Redhat 6 support will still be available until 2020.
