Please note: this gives every appearance of working, then fails after 24 hours or so. There seem to be some funnies with the way / is handled, and more investigation is needed.
ESXi has a very limited shell; some things are missing, others aren't where they should be. The SSH implementation also seems to be broken in some way I haven't pinned down, so we are left doing this as a two-step backup: the ESXi server makes the backup and a remote host sucks it off (fnarr fnarr).
ESXi is a VERY tightly controlled environment, and resource use is critical, so we will be keeping this procedure simple.
First up, you will need to enable SSH; this is covered here.
Start up a vSphere session to this server, select the server itself and click the Configuration tab, then click Security Profile. You can also enable the SSH service here. Now click Properties on the far right, find SSH Client in the list and check the box. OK this and come out. You should now be able to SSH from the ESXi server.
Now to make our key. You won't find ssh-keygen in the path, and we have no home directory either, so this is a bit awkward.
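The exact location of ssh-keygen varies between ESXi builds; the path below is an assumption from my box, so locate yours first if it differs:

```shell
# ssh-keygen exists on ESXi but isn't in $PATH -- call it by full path.
# The path below is an assumption; find yours with: find / -name ssh-keygen
/usr/lib/vmware/openssh/bin/ssh-keygen -t rsa
```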
When prompted, save the key to /.ssh/id_rsa and just press enter for no passphrase on both counts. The public half will land in /.ssh/id_rsa.pub.
The command we used for the other backups will work here, it just needs some modification…
ssh <yourserverip> mkdir -p .ssh && cat /.ssh/id_rsa.pub | ssh <yourserverip> 'cat >> .ssh/authorized_keys' && ssh <yourserverip> chmod -R 700 .ssh
Test it: you should now be able to log in to your server with no password. Sadly this only works for root 🙁
Now, backups. This bit sucks. There are varying reports of this being possible/not possible without stopping the host. It turns out it's entirely possible, but you need the space to do it, as we are essentially going to have to clone the VM. It may actually be worth having a backup disk as an empty filestore for this.
Grab the script from here and get it onto your ESXi machine; you can SCP it over now. You'll want to copy the rsync binary over too, which you can find here. Chuck it in /bin, make sure you rename it to rsync and chmod it. Do the same with the script and call it backupvm.sh.
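For reference, getting both files over from your workstation looks something like this (the local filenames are assumptions; use whatever the downloads gave you):

```shell
# run from your workstation; source filenames here are examples
scp yourscript.sh root@<esxihost>:/bin/backupvm.sh
scp rsync root@<esxihost>:/bin/rsync
# make both executable
ssh root@<esxihost> chmod +x /bin/backupvm.sh /bin/rsync
```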
We are going to edit the script a little just to make our life easier. This means we have just one script to run and the CRON job can then take the VM name as an argument.
Edit lines 13 and 16 to fit your host, then change lines 29 and 99 from:

tar -czpf "$BACKUP_ROOT/$MACHINE-$BACKUP_APPEND.tgz" "$BACKUP_PATH"

to:

tar -czpf "$BACKUP_ROOT/$MACHINE.tgz" "$BACKUP_PATH"
Our backup is no longer a moving target, which makes our job easier. Run it and make sure all is well; you may want to create a small VM to test with. I had a Win98 VM to hand and used that.
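With the edits above the script takes the VM name as its argument, so a test run looks something like this (the VM name is an example):

```shell
# back up a single VM by name; "Win98" here is just an example
/bin/backupvm.sh Win98
```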
This is undoubtedly somewhere a faster server will help, and it's definitely something to run after hours. A G2030 takes about 2 minutes to do a 4 GB VM; as another plus, the resultant file is smaller than the raw VM. As an aside, the original script didn't remove its temporary folder; after the changes it now does.
Now, time to test rsync. I have a tgz file here called win98.tgz and I’ve a folder on my remote server of /backups/vm/win98.
rsync -avz -e "ssh" --progress win98.tgz root@<yourserver>:/backups/vm/win98
Should do the trick. Let it run and make sure the file has indeed gone over. Re-running the command should result in rsync coming back without doing an upload, as the files are consistent. Edit the backup script again and add the following at line 32.
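The snippet for line 32 just defines the two variables the rsync step uses further down; the values here are assumptions for my setup, so change them to suit:

```shell
# destination for the rsync step (values are examples)
BACKUPSERVER=<yourserverip>
REMOTEPATH=/backups/vm
```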
At what is now line 107, under echo "removed temp files.", add:

echo " Starting server backup "
/bin/rsync -avz -e "ssh" --progress "$BACKUP_ROOT/$MACHINE.tgz" root@$BACKUPSERVER:$REMOTEPATH/$MACHINE/$MACHINE.tgz
if [ "$?" -eq "0" ]
then
    echo " Sync done, deleting local file "
    rm -f "$BACKUP_ROOT/$MACHINE.tgz"
    logger Backup of $MACHINE to $BACKUPSERVER completed successfully
else
    logger Backup of $MACHINE to $BACKUPSERVER failed!
    echo " Sync failed! "
    echo " local file NOT deleted"
fi
And test it again….
What we are doing now is using your new parameters and the existing ones to build the rsync command line, so we have added another step at the end of the script that actually does the backup. All you need do is make sure the server has a folder for the backup to drop into. If the sync fails, the backup file will be left locally. Watch this, as repeated failed syncs could make your storage vanish as the local copies build up; you may want to just drop the file regardless.
Time for cron; this should be easy... guess what? 🙂 You can't edit the root crontab file. Here
is the official method from VMware, but it doesn't work: the file can't be written to no matter what you do, and changes aren't persistent anyhow.
To keep things simple, we are going to run our backups from a script. Create /backups.sh and pop the following in there:

#!/bin/sh
# Backup script
/bin/logger Backups starting...
Now add a line for each VM you'll be backing up; they will run sequentially.
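A finished /backups.sh with two VMs might look like this (the VM names are made up):

```shell
#!/bin/sh
# Backup script
/bin/logger Backups starting...
# one line per VM, run in order; these names are examples
/bin/backupvm.sh Win98
/bin/backupvm.sh TestVM
```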
Save the file and chmod it; test it if you feel the need. We will be calling this from cron. Now, there is no need to do it this way: you could create individual cron jobs for each machine, batch them up, whatever you feel. This is just a simple way to do it sequentially.
It's worth benchmarking your backups at this point. Some larger VMs can take a LONG time to complete and transfer, and you want to schedule things so you don't get overlaps. Using the method below, with one cron job and a backup script, will avoid that, but it's still worth doing so you can get an idea of when to start them so they actually finish out of hours.
Now to deal with cron. The crontab lives in /var/spool/cron/crontabs, however there are two issues. Firstly, this file is trashed on every boot. Secondly, it's not even a real file; you can't actually edit it. You can copy it, edit the copy, then copy it back, though. So, kludgey as it is, that's what we will do here. Before we go further you need to know how to schedule cron jobs. This link
should help you out. We are going to run this job at 1am every other day from the second day of the month (even days). The schedule we need for cron is 0 1 2-30/2 * * /backups.sh. Create a new script /addbackupjob.sh and pop in
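For reference, the fields in that crontab entry break down as follows:

```shell
# min hour day-of-month month day-of-week command
# 0   1    2-30/2       *     *           /backups.sh
# i.e. 01:00 on every even-numbered day of the month
```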
#!/bin/sh
# Script to add the backup job
cp /var/spool/cron/crontabs/root /var/spool/cron/crontabs/root.tmp
cp /var/spool/cron/crontabs/root /var/spool/cron/crontabs/root.bak
echo "0 1 2-30/2 * * /backups.sh" >> /var/spool/cron/crontabs/root.tmp
cp /var/spool/cron/crontabs/root.tmp /var/spool/cron/crontabs/root
kill $(cat /var/run/crond.pid) && crond
Run this, then cat /var/spool/cron/crontabs/root and make sure it has done as it should; it'll leave a backup file behind just in case. This link
details how to run a script at boot. We need to add /addbackupjob.sh to /etc/rc.local.d/local.sh. Open the file and add it just above the last line, exit 0.
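The tail end of /etc/rc.local.d/local.sh then looks something like this (only the /addbackupjob.sh line is new; the rest of the file stays as it was):

```shell
# /etc/rc.local.d/local.sh -- re-add the backup cron job on every boot
/addbackupjob.sh
exit 0
```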
That *should* be it. The cron job will be re-added on every reboot.
You can use /bin/logger in the script if you’d like to write to the syslog.
*** UPDATE ***
If you have multiple datastores, this isn't going to work for you. However, with some changes it can be made to.
Edit your ./backupvm.sh and make the following changes: