Backup Zarafa with Bacula

Mon 05 March 2012

Last week I finished migrating our mail/collaboration platform to Zarafa, and as with all things this needs to be backed up. We're running the Zarafa Enterprise edition, which comes with a backup tool called zarafa-backup that works like this:

The first time you run the zarafa-backup tool it creates a data file and an index file referring to the items (folders and mails) inside the data file.

The next time you run zarafa-backup it detects the existing files, creates an incremental data file and updates the corresponding index file. It keeps doing this until you delete the data files and index file; then it will create a new full backup and the cycle starts all over.
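
On the command line the cycle boils down to something like this (a minimal sketch using the same -a and -o flags as the backup script further down; the output folder is the one that script uses):

# First run against an empty output folder: zarafa-backup creates the full set
# (data file plus the index file referring to the items inside it)
zarafa-backup -a -o /zarafa_backup/working

# Every following run against the same folder adds an incremental data file and
# updates the index, until the set is removed and the cycle starts over
zarafa-backup -a -o /zarafa_backup/working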

We are using Bacula to do our backups so I needed to work something out.

As stated earlier, zarafa-backup just keeps on creating incrementals, which means that if you keep this running, a restore will involve restoring a lot of incrementals first. This is not something I wanted...

So I made my schedule like this:

  • Create a full backup on Friday evening. That way we have the weekend to run the backup.
  • Until the next Friday we let zarafa-backup create incrementals in the working folder.
  • On the next Friday we move the complete set to another folder (I called it "weekly") and back it up. If this is successful we empty the weekly folder again. Then we run zarafa-backup again, which creates a new full backup (since the complete set has been moved and the working directory is empty).

Bacula schedule

Two schedules are created, each with its own storage pool:

  • One that runs on Friday.
  • One that runs on all the other days.

Schedule {
        Name = "zarafa-dly"
        Run = Level=Full Pool=ZDLY-POOL sat-thu at 19:00
}
Schedule {
        Name = "zarafa-wkly"
        Run = Level=Full Pool=ZWKLY-POOL fri at 19:00
}
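
The two pools referenced above are just regular Bacula pools; a minimal sketch of what they could look like (the retention values here are examples, not necessarily what we run):

Pool {
        Name = ZDLY-POOL
        Pool Type = Backup
        Recycle = yes
        AutoPrune = yes
        Volume Retention = 7 days
}
Pool {
        Name = ZWKLY-POOL
        Pool Type = Backup
        Recycle = yes
        AutoPrune = yes
        Volume Retention = 1 month
}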

Bacula Zarafa client

The client config has 2 jobs defined:

  • One that does the daily backups using the "zarafa-dly" schedule.
  • One that backs up the weekly sets using the "zarafa-wkly" schedule.

Each job runs a script before the backup starts. The second job, which backs up the weekly sets, also has a script that runs after the backup has been made; this script empties the weekly folder.

Job {
        Name ="MAIL02-DLY"
        FileSet="ZARAFA-STORES"
        Client = mail-02
        Storage = TapeRobot
        Write Bootstrap = "/var/lib/bacula/%c.bsr"
        Messages = Standard
        Schedule = "zarafa-dly"
        Type = Backup
        Pool = ZDLY-POOL
        ClientRunBeforeJob = "/etc/bacula/zbackup.sh"
        Run After Job = "/scripts/bacula2nagios \"%n\" 0 \"%e %l %v\""
        Run After Failed Job = "/scripts/bacula2nagios \"%n\" 1 \"%e %l %v\""
}

Job {
        Name ="MAIL02-WKLY"
        FileSet="ZARAFA-WEEKLY-STORES"
        Client = mail-02
        Storage = TapeRobot
        Write Bootstrap = "/var/lib/bacula/%c.bsr"
        Messages = Standard
        Schedule = "zarafa-wkly"
        Type = Backup
        Pool = ZWKLY-POOL
        ClientRunBeforeJob = "/etc/bacula/zbackup.sh"
        Client Run After Job = "/etc/bacula/zbackup-cleanup.sh"
        Run After Job = "/scripts/bacula2nagios \"%n\" 0 \"%e %l %v\""
        Run After Failed Job = "/scripts/bacula2nagios \"%n\" 1 \"%e %l %v\""
}
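
The two FileSets are not shown above; a minimal sketch, assuming the folders used by the backup script below:

FileSet {
        Name = "ZARAFA-STORES"
        Include {
                Options {
                        signature = MD5
                }
                File = /zarafa_backup/working
        }
}
FileSet {
        Name = "ZARAFA-WEEKLY-STORES"
        Include {
                Options {
                        signature = MD5
                }
                File = /zarafa_backup/weekly
        }
}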

Backup script

#!/bin/bash

# Variables
ZBFOLDER=/zarafa_backup/working
WEEKLYFOLDER=/zarafa_backup/weekly
DRFOLDER=/zarafa_backup/dr
WEEK=`date +%W`

# Friday and the working folder is empty: start a fresh full backup
if [ `date +%w` -eq 5 -a `ls -A $ZBFOLDER | wc -l` -eq 0 ]; then
    echo "Starting Full backup"
    zarafa-backup -a -o $ZBFOLDER
# Friday and the working folder still holds the previous set: move it to weekly first
elif [ `date +%w` -eq 5 -a `ls -A $ZBFOLDER | wc -l` -ne 0 ]; then
    echo "Copying working to weekly and start new Full backup"
    mkdir -p $WEEKLYFOLDER/week-$WEEK
    cp $ZBFOLDER/* $WEEKLYFOLDER/week-$WEEK
    rm -f $ZBFOLDER/*
    zarafa-backup -a -o $ZBFOLDER
# Any other day: add an incremental to the existing set
else
    echo "Starting Incremental backup"
    zarafa-backup -a -o $ZBFOLDER
fi

Cleanup script

#!/bin/bash
#cleanup the weekly folder after bacula has run
WEEKLYFOLDER=/zarafa_backup/weekly

rm -rf $WEEKLYFOLDER/*

Detect MTU size when using Jumbo Frames

Wed 22 June 2011

Recently I've set up an iSCSI target based on RHEL6 + tgt. After adding Logical Volumes to a target in the tgtd config file, the iSCSI target was discoverable and ready for use.

After testing this setup for a few days I wanted to tune the network traffic by enabling Jumbo Frames. If you search on the interwebz you'll most likely find information about adding "MTU=9000" (for RHEL-based clones) to the config file of the network interface.
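
For a RHEL-style setup that boils down to something like this in the interface config file (the interface name is just an example):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
MTU=9000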

The problem with Jumbo Frames is that when you set the MTU too high you get fragmentation, so blindly changing your MTU to 9000 will probably lead to exactly that. If you don't know this it can be quite hard to troubleshoot, because you can still use ssh, ping the target, etc., but the iSCSI targets will keep failing.

You can easily check this with good old ping. Running this:

ping -M do -s 9000 <target_ip>

  • -M: path MTU discovery strategy; "do" means "prohibit fragmentation".
  • -s: the packet size (ICMP payload) in bytes.

This gave me the following result:

From 10.0.0.13 icmp_seq=1 Frag needed and DF set (mtu = 9000)

Lower the packet size until you get a normal ping reply. Keep in mind that -s sets the ICMP payload size: the packet on the wire is 28 bytes larger (20 bytes IP header + 8 bytes ICMP header), so the largest payload that gets through plus 28 is the MTU value you can use in your network card's config file.

ping -M do -s 8900 <target_ip>
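
If you don't feel like doing that by hand, a quick sketch like this walks the payload size down for you (the target IP and step size are just examples):

#!/bin/bash
# Walk the ICMP payload size down until a non-fragmenting ping gets a reply
TARGET=10.0.0.13   # example target IP
SIZE=9000          # starting payload size

while [ $SIZE -gt 0 ] && ! ping -c 1 -W 1 -M do -s $SIZE $TARGET > /dev/null 2>&1; do
    SIZE=$((SIZE - 28))
done

echo "Largest working payload: $SIZE bytes"
echo "Matching MTU: $((SIZE + 28)) bytes (payload + 20 byte IP header + 8 byte ICMP header)"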

RHEV setup

Mon 20 June 2011

This blog post comes a little late because I did this RHEV setup at our company more than 6 months ago and it has been living in the drafts folder for some time now. With RHEV 3.0 Beta released, I thought it was time to publish it.

About a year and a half ago we started looking at alternatives for our VMWare ESXi setup because we wanted to add hypervisor nodes to our 2 existing nodes running VMWare ESXi. We also wanted the ability to live migrate VMs between the nodes. At the same time Red Hat released RHEV 2.1, and being a Red Hat partner we decided to evaluate it.

We extended our existing setup with 2 Supermicro servers and a Supermicro SATA disk based SAN box configured as an iSCSI target providing around 8TB of usable storage.

Migration

To migrate our existing VMs running on VMWare we used the virt-v2v tool, which converts and moves VMWare machines to RHEV. This procedure can be scripted so you can define a set of VMs you want to migrate in one go; unfortunately these VMs need to be powered down. I noticed that if your vmdk folders/files are scattered around on your storage, including different folder names, the virt-v2v tool in some cases bails out. In our case I could understand why the tool refused to migrate some machines (it was quite a mess).
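
The scripted migration was essentially a loop over a list of powered-down VM names; a rough sketch of the idea, assuming the virt-v2v syntax of that era and with placeholder host and export domain names:

#!/bin/bash
# Convert a list of powered-down VMware guests and drop them in the RHEV export domain
ESX_HOST=esx01.example.com                     # placeholder ESX host
EXPORT_DOMAIN=nfs01.example.com:/rhev/export   # placeholder NFS export storage domain

for VM in webserver01 dbserver01 mail01; do    # example VM names
    virt-v2v -ic "esx://$ESX_HOST/?no_verify=1" -o rhev -os "$EXPORT_DOMAIN" "$VM"
done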

Hypervisors

You have 2 options to install the hypervisor nodes:

  • RHEV-H: a stripped-down RHEL with a 100MB footprint that provides just enough to function as a hypervisor node.
  • RHEL: a default RHEL install you can configure yourself.

We created a custom profile on our Kickstart server so we could easily deploy hypervisor nodes based on a standard RHEL. By using a standard RHEL you can install additional packages later on, which is not the case with a RHEV-H based install.

Once installed, you can add the node to your cluster from within the manager interface. Once added, it automatically installs the necessary packages and becomes active in the cluster.

Storage

After adding hypervisor nodes you need to create "Storage Domains" based on either NFS, FC or iSCSI. Besides Storage Domains you also need to define an ISO domain to store your installation images. If you want to migrate VMs from VMWare or other RHEV clusters you need to create an Export Domain.

In each cluster one hypervisor node automatically gets the SPM (Storage Pool Manager) role. This host keeps track of where storage is assigned. As soon as this host is put in maintenance or becomes unavailable, another host in the cluster takes over the SPM role.

VMs can use Preallocated disks (RAW) or Thin Provisioning (QCOW). For best performance, Preallocated is recommended.

Conclusion

We have been running this setup for more than a year now and haven't had any real issues with it. We actually filed 2 support cases, which have been resolved in newer releases of RHEV. At the moment we run around 100 VMs and, although I haven't run any benchmarks yet, I see no real difference compared with our VMWare setup using FC storage. Although the product still has some drawbacks, I believe it has a solid base to build on and already has some nice features like Live Migration, Load Balancing, Thin Provisioning, ...

Cons

  • RHEV-M (manager) runs on Windows
  • RHEV-M can only be accessed via IE (will probably change in 3.1)
  • Storage part is quite confusing at first.
  • API only accessible via PowerShell
  • no live snapshots

In a few weeks I'll probably start testing RHEV 3.0, which now runs on Linux on JBoss. This makes me wonder whether JBoss clustering could be used to get RHEV-M running in an HA setup.

Switched to Jekyll

Sat 11 June 2011

It has been a while since I last blogged about a "decent" topic, and actually it's been a while since I blogged about anything. The reason is a lack of time and also some laziness. But that should change now, and the first step I took was migrating my blog from Drupal to a Jekyll generated website. Not that Drupal is bad or anything, but it's quite overkill and somehow didn't feel very productive while creating content.

So how did I end up with Jekyll?

Because I like using plain text files for writing (I use LaTeX quite a lot) I started looking for a blogging tool that uses plain text files to store its content instead of a database. PyBlosxom and Blosxom came to mind, but then Jekyll popped up in one of my search results and I immediately liked it because it generates static content you can upload to any webserver. No more PHP, Python, Perl, MySQL or updating needed. However, you do need Ruby on the machine that does the generation. One "drawback" of a static website is commenting, and for a moment I was planning on dropping comments on my blog, but I went for Disqus, which I actually quite like.

Now I have my blog stored in a git repository that rsyncs the static content to my webserver when I push my changes. As simple as that.
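
The plumbing behind that is nothing more than a small post-receive hook in the bare repository on the server; a minimal sketch (paths, destination and the old-style jekyll invocation are assumptions about my setup):

#!/bin/bash
# post-receive hook: check out the pushed content, rebuild the site and rsync it to the webserver
GIT_REPO=/srv/git/blog.git                  # bare repo this hook lives in (placeholder)
WORK_TREE=/srv/build/blog                   # working copy used for the build (placeholder)
DEST=user@www.example.com:/var/www/blog/    # placeholder rsync destination

git --git-dir="$GIT_REPO" --work-tree="$WORK_TREE" checkout -f
cd "$WORK_TREE" && jekyll                   # pre-1.0 jekyll: generates the site into _site/
rsync -av --delete "$WORK_TREE/_site/" "$DEST"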

I really like the thought of using Markdown and vim to write my blog posts from now on (and of course the geeky factor of all this). The only thing left is improving the layout and sanitizing the setup a bit more.

I'll be at LOAD (Linux Open Administrator Days)

Wed 13 April 2011

LOADays

Getting DropBox to work with SELinux

Sun 21 November 2010

Recently Serge mentioned DropBox to me, and I remembered creating an account once, but I hadn't used or installed it in the last 2 years.

These days you also get a lot more free space with your DropBox, so I decided to start using it again.

So I started installing DropBox using the rpm from their website, but got an SELinux warning. Setroubleshootd explains perfectly what's going on, and the solution is trivial.

[root@localhost ~]# semanage fcontext -a -t execmem_exec_t '/home/vincent/.dropbox-dist/dropbox'
[root@localhost ~]# restorecon -vvF '/home/vincent/.dropbox-dist/dropbox'
restorecon reset /home/vincent/.dropbox-dist/dropbox context unconfined_u:object_r:user_home_t:s0->system_u:object_r:execmem_exec_t:s0
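
If you want to double-check that the context stuck, ls -Z on the binary should now show the execmem_exec_t type from the restorecon output above:

ls -Z /home/vincent/.dropbox-dist/dropbox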

RHCE

Fri 29 October 2010

So today I went to sit the RHCE exam for the second time. This time the results were better than before.

RHCT components score: 100.0
RHCE components score: 100.0

RHCE certificate number : 805010290454578

The instructor mentioned that this was probably one of the last exams based on RHEL5.

Anyways, I'm glad I made it this time...

Fedora 14 Release party

Thu 21 October 2010

The date for the Belgian Fedora Release Party has been set. A bigger (as in "print this and hang it up in your office") file has been attached.

Fedora 14 Release party poster


LOAD dinner

Wed 11 August 2010

Yesterday evening we had a dinner with most of the LOAD organizers to catch up and have a nice get-together. We also wanted to discuss some things regarding LOAD.

One of them was whether we all wanted to organise a second edition of LOAD, and I can already tell you there'll be a second edition. For now that's the only thing that's certain: date, location, talks, ... are still undecided, although the location will probably be the same.

We will soon archive the current website and start posting updates regarding the next edition.

We hope to see you all at the next edition of LOAD.

Fedora 13 Release Party @ hackerspace Ghent

Sun 30 May 2010

This time the Fedora 13 Release Party took place in the Hackerspace in Ghent, called WhiteSpace. As I arrived in the street where the Hackerspace is located, I noticed someone who was also at the previous Release Party.

A few minutes later biertie arrived with a big Fedora banner and signs to hang up so people would find their way to the HackerSpace (quite handy since the venue was like a small labyrinth).

Next thing was putting the PXE boot server I prepared in place so people could install Fedora 13 on their machines. After PXE booting some laptops to see if it still worked, we were good to go. Bert also created USB sticks with Fedora for some people.

The last day of Puppet Camp Europe was also taking place in Ghent and a lot of people came over to the Fedora Release Party and the HackerSpace became quite crowded.

Bert ordered pizzas with the Fedora budget he had so we wouldn't starve. Drinks were provided by the HackerSpace for very reasonable prices. Club Mate anyone?

After the food Bert gave a quick presentation about the new stuff in Fedora 13. Dag Wieers also showed up and was asked to give a lightning talk about dstat. In the end his talk lasted more than an hour. He showed us a nice demo of dstat's features and talked with real passion about it, so thanks for your talk Dag!

After all this it was time for some chit chat....

Thanks everyone for being there and see you all at the next Release Party!

Ow yeah, thanks Kris for bringing me a Puppet Camp T-shirt! I would also like to thank the people from HackerSpace Ghent for letting us use their infrastructure to host the event. If you're a geek living near Ghent, join them!

I've seen people take pictures, so if you read this put links to them in the comments please...thanks.
