Introduction
There is no single best technique or policy for backing up your data on Linux. The right choice depends on what kind of data you have and whether you want to back it up locally or remotely.



There are mature tools for specific tasks, and you have to decide which one suits you best. Some GUIs simplify the job, but when something goes wrong, the terminal is the only way to debug what is happening.



Entire partition

You can copy the entire partition in raw mode with dd:



sudo dd if=/dev/sda1 of=/destination/file.raw

GUI: Partimage



Notes: the output file will be about as large as the partition itself, even if the partition is almost empty.
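Since the raw image is as large as the partition, it is common to pipe dd through gzip. A minimal sketch, using a small test file in /tmp as a stand-in for /dev/sda1 (a real partition would require root):

```shell
# In practice the input would be a block device, e.g. if=/dev/sda1 (requires root).
# Here a small zero-filled file stands in for the partition, so the sketch runs anywhere.
dd if=/dev/zero of=/tmp/fake_partition.raw bs=1M count=4 2>/dev/null

# Pipe the raw image through gzip: empty space compresses very well.
dd if=/tmp/fake_partition.raw bs=1M 2>/dev/null | gzip > /tmp/partition.raw.gz

# Restore by decompressing back to the device (or, here, to a file).
gunzip -c /tmp/partition.raw.gz > /tmp/restored.raw
cmp /tmp/fake_partition.raw /tmp/restored.raw && echo "images match"
```

The compressed image of a mostly-empty partition can be orders of magnitude smaller than the raw one.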



Entire system

tar -cvpzf backup.tar.gz --exclude=/other-directory /my-directory

GUI: File Roller run as root, saving with a .tar.gz extension



Notes: you have to preserve permissions if you don't want problems. Also, this method does not copy the bootloader.
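A round-trip sketch showing how the -p flag keeps permissions on both sides; the /tmp paths are stand-ins chosen for illustration:

```shell
# Create a small directory tree to back up (stand-in for a real system directory).
mkdir -p /tmp/demo-src/etc
echo "hello" > /tmp/demo-src/etc/config
chmod 600 /tmp/demo-src/etc/config

# -p preserves permissions in the archive; -C sets the working directory.
tar -cpzf /tmp/backup.tar.gz -C /tmp demo-src

# Restore elsewhere, again with -p so the 600 mode survives.
mkdir -p /tmp/demo-restore
tar -xpzf /tmp/backup.tar.gz -C /tmp/demo-restore
```

To restore a full system backup you would extract with -C / as root instead.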



Source code

The best way to back up your code and keep track of modifications is to use a distributed version control system (DVCS):

Git (guide in Italian)

GUI: Giggle or others (but a full-featured GUI is still missing)



Notes: there are other DVCSs, such as Bazaar (more integrated with Ubuntu), Mercurial, Darcs, and so on, but Git seems to be becoming the standard.
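A minimal sketch of the Git workflow as a backup tool: put a project under version control, then keep a bare mirror clone as the backup copy. All paths and the commit identity below are hypothetical:

```shell
# Put a project under git (paths are stand-ins for a real project).
mkdir -p /tmp/myproject && cd /tmp/myproject
git init -q .
git config user.email "demo@example.com"   # an identity is required to commit
git config user.name "Demo"
echo 'print("hello")' > main.py
git add main.py
git commit -q -m "first commit"

# A bare mirror clone carries the full history and all branches;
# the destination could just as well be another disk or a remote host.
git clone -q --mirror /tmp/myproject /tmp/myproject-backup.git
```

Running `git push --mirror` against that clone later keeps the backup in sync.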



Big binary files

...like .iso images or other binary blobs. diff handles plain-text differences, while xdelta handles binary differences.



xdelta3 -e -s old_file new_file delta_file

Personal data (local)

rsync + local directory



Personal data (over LAN)

client: rsync + fuse + curlftpfs



server: ftp server



GUI: rsync (Grsync), ftp server (Gproftpd)



Notes: the best way to do an incremental backup is to start, on the server machine, with a .tar of the directory you want to back up.



You can also use rsync over ssh, Samba, or other network protocols, but ftp is the fastest.

Any software you need is already in the Debian/Ubuntu repositories.

