# You Are Only as Good as Your Next Resurrection (Part 3)

(NOTE: This is the third of a three-part series on setting up a cloud-based backup system; it describes how to set up the backup system to run automatically on a schedule. The first part describes the rationale behind the various technologies selected for the system, while the second describes how to set up and use the backup system.)

## From a Backup to a Backup System

OK, with all of the above we have only the building blocks of a backup system. We need to wrap the backup operation in a script that takes care of pruning older snapshots and related housekeeping. A full-fledged, really nice solution that includes incorporation into the systemd scheduler can be found here. In this article, we will base the core logic on that work, but the overall approach will be simpler.

## The Backup Script

Our script is designed to be modular with respect to both the destination repository and the paths to be backed up. It takes as its primary argument a path to a backup configuration file, which includes the restic-specific information in the form of environment variables, as above, but also includes variables that define the paths to be backed up and the paths to be excluded, "$BACKUP_PATHS" and "$BACKUP_EXCLUDES", respectively:

```
export AWS_ACCESS_KEY_ID="your-Wasabi-Access-Key"
export AWS_SECRET_ACCESS_KEY="your-Wasabi-Secret-Key"
export RESTIC_REPOSITORY="s3:https://s3.wasabisys.com/test-js-2018-06-10"
export BACKUP_PATHS="$HOME/Contents/projects"
```

The backup script is given in the following Gist:

## Running the Backup Script

FIRST: make sure that the repository has been created (initialized), following the instructions given here. If you have not yet actually done this, you will have to use the "init" command on the first backup run:

```
$ ~/.local/bin/bu -c ~/.config/backup/primary-backup.conf.sh init
```


but otherwise (and subsequently) you would just run:

```
$ ~/.local/bin/bu -c ~/.config/backup/primary-backup.conf.sh backup
```

assuming that you have the script at "~/.local/bin" and the configuration file at "~/.config/backup/primary-backup.conf.sh".

The script provides a number of other options that allow for some useful operations when managing backups. But if invoked with none of them (i.e., just the path to the script followed by the path to the backup configuration file), it executes the default workflow: backup, forget, and prune.

• "Backup" is, of course, taking the latest snapshot.
• "Forget" dereferences older snapshots based on a given retention policy. The policy is defined by script variables and is currently set to retain the latest (single) backup snapshot for each day for the last 14 days, the latest snapshot for each week for the last 16 weeks, the latest for each month for the last 18 months, and the latest for each year for the last 3 years. This policy is easy enough to modify by editing the variables or simply editing the "forget" command directly.
• "Prune" actually deletes the "forgotten" or dereferenced snapshots from the repository.

## Automating the Backup

You have a number of tools available to you to schedule automatic backups, which I discuss in detail here.

### Using Cron

If you are doing hourly backups and are OK with missing some of these hourlies during hours when your computer is off, down, or suspended, then good old simple and reliable Cron works just fine. Open your crontable for editing using:

```
crontab -e
```

and add the following lines:

```
0 * * * * ~/.local/bin/bu --no-trim ~/.config/backup/primary-backup.conf.sh
0 4 * * 0 ~/.local/bin/bu --no-backup ~/.config/backup/primary-backup.conf.sh
```

And that's all there is to it! With the above set up, we back up once an hour, on the hour, and once a week, at 4 am on Sunday, we "trim" (our own term for a combined "forget" and "prune" restic operation, in which older backups are removed from the repository).
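The "trim" just described maps onto restic's retention flags. As a rough sketch — the variable names below are illustrative, not necessarily those used in the actual script, but the `--keep-*` values match the retention policy described above — the script assembles a "forget" invocation along these lines:

```shell
# Illustrative sketch only: variable names are hypothetical; the --keep-*
# values mirror the retention policy described in the article.
KEEP_DAILY=14
KEEP_WEEKLY=16
KEEP_MONTHLY=18
KEEP_YEARLY=3

forget_args() {
    echo "--keep-daily $KEEP_DAILY --keep-weekly $KEEP_WEEKLY --keep-monthly $KEEP_MONTHLY --keep-yearly $KEEP_YEARLY"
}

# The trim step then amounts to something like:
#   restic forget $(forget_args)
#   restic prune
```

Changing the retention policy is then just a matter of editing these variables.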
The general workflow makes sense – we want frequent backups, but we want to clear older/redundant ones to economize on space according to our specified retention policy. Backups themselves are highly efficient and fast, but pruning can be time-consuming, so while we run backups frequently, we only prune once a week.

This configuration may be good enough for most folks. But one possible glitch is that if your machine is offline, suspended/sleeping, or just switched off at the scheduled event times, the backups/prunes will be missed. If this is going to be an issue, then you may want to look into anacron instead, which was designed to solve exactly this problem. Another, far more sophisticated, yet also far more complex, approach is to use "systemd" …

### Using "systemd"

The current tool of choice for timing control for many, and with good reason, appears to be "[systemd](https://hackernoon.com/a-brief-overview-and-history-of-systemd-the-linux-process-manager-ca508bee4a33)". It is considerably more complex than Cron, and there is considerably less information in the form of hand-holding walk-me-through-it tutorials, recipes, guides, etc. on how to set it up, especially as a normal (non-root) user. Here, I present my version of using "systemd" to set up the backups.

1. To schedule our backup jobs to run automatically using the trendy "systemd", we first need to create two files, or units, as they are known: a service unit and a timer unit. Both will be placed in the directory "~/.config/systemd/user/". (Note that "user" here is literally "user": do not substitute your username. Also note that this location is used because we are setting up our backups to run in userspace, i.e., using the "--user" flag, so that we can manage and run them as a normal user. If we were going to do this as root, then the location would be something else, e.g. "/etc/systemd/system".)
If we want to call our backup system, e.g., "primary-backup", then we would create the following two files:

• ~/.config/systemd/user/primary-backup.service
• ~/.config/systemd/user/primary-backup.timer

2. Type the following for the service unit, "~/.config/systemd/user/primary-backup.service":

```
[Unit]
Description=Primary backup of this machine.

[Service]
Type=simple
ExecStart=/home/yourusername/.local/bin/bu --no-trim /home/yourusername/.config/backup/primary-backup.conf.sh

[Install]
WantedBy=default.target
```

Note that, here, "yourusername" is to be substituted with your actual username. More generally, and more importantly, note that the "ExecStart" command must not contain any variables or require shell expansion: it must be specified as an absolute path.

3. Type the following for the timer unit, "~/.config/systemd/user/primary-backup.timer":

```
[Unit]
Description=Primary backup of this machine.
# Allow manual starts
RefuseManualStart=no
# Allow manual stops
RefuseManualStop=no

[Timer]
# Execute job if it missed a run due to machine being off
Persistent=true
# Run at 4am, 12pm, 8pm every day
OnCalendar=*-*-* 04,12,20:00:00
# File describing job to execute
Unit=primary-backup.service

[Install]
WantedBy=timers.target
```

(Note that comments in unit files must be on their own lines: systemd does not support trailing comments after a value.) Here we specify that the backup is going to run at 4am, 12pm, and 8pm every day. More sophisticated/complex scheduling is possible using the "OnCalendar" directive, as discussed in my "systemd" post, and if you are interested in this, you might want to look here or here for more information. Note the "Persistent=true" directive: this states that if a backup was missed due to the machine being off or offline, it should be carried out the next time the machine comes online. Also note that if you want to use the "Persistent=true" option, you will have to use the "OnCalendar" directive rather than the perhaps simpler "OnUnitActiveSec" or similar directives.

4. Enable the timer and start it:

```
$ systemctl --user enable primary-backup.timer
```
```
$ systemctl --user start primary-backup.timer
```

5. Check to see if it is scheduled successfully:

```
$ systemctl --user status primary-backup.timer
$ systemctl --user list-unit-files | grep primary-backup
```

6. View the logs:

```
$ journalctl --user --unit primary-backup.service
$ journalctl --user --unit primary-backup.timer
```

Or, to see only log messages for the current boot:

```
$ journalctl --user --unit primary-backup.service --boot
$ journalctl --user --unit primary-backup.timer --boot
```

7. If we want to manually run the backup:

```
$ systemctl --user start primary-backup.service
```

8. Or manually stop it:

```
$ systemctl --user stop primary-backup.service
```

9. To stop the backup service from running automatically:

```
$ systemctl --user stop primary-backup.timer
```


and/or then deschedule it from ever running:

```
$ systemctl --user disable primary-backup.timer
$ systemctl --user disable primary-backup.service
$ systemctl --user daemon-reload
$ systemctl --user reset-failed
```

10. So far, we have only scheduled the backup operation, having deliberately chosen not to schedule regular forgets and prunes (dereferencing of old snapshots and deletion of unreferenced data blocks) alongside the backups. This is because prunes are time-consuming and expensive operations, and we would prefer to take the hit in space usage over a short period (say, a week) and then do all the prunes at once to save time. To schedule the forgets and prunes, we need to create a second pair of timer and service files:

1. Type the following for the service unit, “~/.config/systemd/user/primary-trim.service”:

```
[Unit]
Description=Trimming of backups of this machine.

[Service]
Type=simple
ExecStart=/home/yourusername/.local/bin/bu --no-backup /home/yourusername/.config/backup/primary-backup.conf.sh

[Install]
WantedBy=default.target
```

As before, "yourusername" is to be substituted with your actual username. Note the "--no-backup" flag (the same one used in the Cron example above), which tells the script to carry out only the forget and prune operations.

2. And the following for the timer unit, “~/.config/systemd/user/primary-trim.timer”:

```
[Unit]
Description=Trimming of backups of this machine.
# Allow manual starts
RefuseManualStart=no
# Allow manual stops
RefuseManualStop=no

[Timer]
# Execute job if it missed a run due to machine being off
Persistent=true
# Run at 0000 hours on Monday
OnCalendar=Mon *-*-* 00:00:00
# File describing job to execute
Unit=primary-trim.service

[Install]
WantedBy=timers.target
```
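Before relying on a new "OnCalendar" expression, it can be worth sanity-checking it. The standard "systemd-analyze calendar" tool parses an expression and prints its normalized form along with the next time it will elapse, without touching any units:

```shell
# Verify the trim schedule: parses the OnCalendar expression and shows
# when it would next trigger, without touching any units.
systemd-analyze calendar "Mon *-*-* 00:00:00"
```

As with the backup units in step 4, remember that these files do nothing until the timer is enabled and started, i.e. "systemctl --user enable primary-trim.timer" followed by "systemctl --user start primary-trim.timer".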


What happens if the prune does not complete before the first scheduled backup at 4 am on Monday? No worries. The wonderful design of restic handles this gracefully: while the prune holds its lock on the repository, the backup simply fails, and the repository is left intact. This does mean that any work you do between midnight and 4 am on Monday will not be backed up at 4 am, but this data should be picked up in the next backup at noon. More importantly, the data integrity of the backup repository will be maintained.
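This "one exclusive operation at a time, others fail cleanly" pattern is worth internalizing. The sketch below is purely illustrative – restic implements this with lock files stored inside the repository itself, not with a local directory – but it shows the shape of the behavior:

```shell
# Illustration only: NOT restic's actual mechanism. A mkdir-based mutex
# stands in for restic's repository lock; an operation that cannot
# acquire the lock fails cleanly instead of risking corruption.
LOCKDIR="${TMPDIR:-/tmp}/demo-repo-lock"

with_repo_lock() {
    if mkdir "$LOCKDIR" 2>/dev/null; then   # acquire: mkdir is atomic
        "$@"                                # run the operation under the lock
        status=$?
        rmdir "$LOCKDIR"                    # release
        return $status
    else
        echo "repository is locked; skipping this run" >&2
        return 1
    fi
}
```

A skipped run is not an error to panic about: like the 4 am backup above, the work is simply picked up by the next scheduled run.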

## More …

• Part 1: what programs/technologies to use, and why.
• Part 2: how to set up the backups and the backup systems.