

Metrics on your custom shell scripts

I was backing up my Arch desktop and wanted a way to visualize whether my backups are succeeding or failing. The stack I will be using for that:

  1. Pushgateway, Prometheus and Grafana running on Kubernetes
  2. Linux host which I want to make backups of

How I am going to do this

I have an NFS server backed by a 3-node replicated GlusterFS cluster, which is mounted on my Arch desktop at /mnt. The idea is that every 6 hours a bash script runs that backs up a set of directories, using rsync to copy them to /mnt, which in turn lands on the NFS server.

Once the backup has completed, we send a metric to Pushgateway and then use Grafana to visualize it.

Arch Desktop

I have installed cronie and rsync and my mounts look like this:

df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
                 20G  7.1G   13G  36% /mnt
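Before the backup script below can pass its check, a sentinel file status.lock containing ok has to exist on the mount. A minimal sketch of that one-time setup and the check itself, using a temp directory to stand in for /mnt:

```shell
# Stand-in for the real NFS mount (in practice this is /mnt)
MNT="$(mktemp -d)"

# One-time setup on the mount: write the sentinel file
echo "ok" > "$MNT/status.lock"

# The same check the backup script performs
if [[ "$(cat "$MNT/status.lock" 2>/dev/null)" == "ok" ]]; then
  echo "lock returned ok, ready to backup"
else
  echo "mount not ready, aborting"
fi
```

If the NFS mount ever drops, the cat fails and the script refuses to "back up" into an empty local directory.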

Then in my scripts directory I have a backup script:


#!/bin/bash

# Pushgateway push URL -- set this to your instance,
# e.g. http://<pushgateway-host>:9091/metrics/job/backups
PUSHGATEWAY_URL=""

if [[ "$(cat /mnt/status.lock)" == "ok" ]]
then
  # Good to backup
  echo "lock returned ok, ready to backup"

  # Source directories (example paths -- substitute your own)
  SRC1="/home/user/documents"
  SRC2="/home/user/photos"
  SRC3="/home/user/projects"

  # Destination directory on NFS server (example path)
  DEST="/mnt/backups"

  # Rsync options
  OPTIONS="-av --delete"

  # Rsync command to sync directories
  rsync $OPTIONS $SRC1 $DEST
  rsync $OPTIONS $SRC2 $DEST
  rsync $OPTIONS $SRC3 $DEST

  # Send metric to pushgateway
  echo "backups_completed 1" | curl --silent --data-binary @- "$PUSHGATEWAY_URL"
else
  # Backup failed
  # Send metric to pushgateway
  echo "backups_completed 0" | curl --silent --data-binary @- "$PUSHGATEWAY_URL"
fi


First I verify that my mount is mounted: I store a file status.lock on the NFS server with the content ok, and the script starts by reading that file. Once I can confirm it is present, I back up my directories, and once that completes I send a request to Pushgateway.
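One caveat worth noting: as written, the script pushes a value of 1 as soon as the lock check passes, even if an individual rsync fails. A small, hypothetical refinement is to fold the rsync exit codes into the metric value (true and false stand in for the real rsync commands here):

```shell
# Start optimistic; any failed backup command flips the metric to 0
STATUS=1

run_backup() {
  # placeholder for: rsync $OPTIONS "$1" "$DEST"
  "$@" || STATUS=0
}

run_backup true    # stands in for an rsync that succeeds
run_backup false   # stands in for an rsync that fails

echo "backups_completed $STATUS"   # -> backups_completed 0
```

The final echo would then be piped to curl exactly as in the script above, so one failed rsync is enough to report the whole run as failed.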

You can see that if it fails to read the status file, we send a metric value of 0 to Pushgateway. The end metric will then look like this in Prometheus:


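For a Grafana query to group by node, the pushed metric needs a node label. With Pushgateway, path segments after /metrics/job/&lt;job&gt; become grouping labels on everything pushed to that URL, so the label can be attached in the push URL itself. A sketch, where pushgateway.example.com is a placeholder for your own instance (the actual curl is shown commented out since the endpoint is hypothetical):

```shell
# Placeholder Pushgateway address -- substitute your own instance
PUSHGATEWAY="http://pushgateway.example.com:9091"

# Path segments after /metrics/job/<job> become grouping labels,
# so this push arrives labelled node="<this machine's hostname>"
PUSH_URL="$PUSHGATEWAY/metrics/job/backups/node/$(hostname)"
echo "$PUSH_URL"

# The actual push would then be:
# echo "backups_completed 1" | curl --silent --data-binary @- "$PUSH_URL"
```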
Visualizing on Grafana

On Grafana we can then use the Prometheus datasource to query for our backups with something like:

sum(backups_completed) by (node)

Which looks like this:


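Because Pushgateway keeps the last value pushed, a failed run leaves backups_completed sitting at 0 until the next success, which also makes it easy to alert on. A hypothetical Prometheus alerting rule for that (the group and alert names are illustrative):

```yaml
groups:
  - name: backups
    rules:
      - alert: BackupFailed
        expr: backups_completed == 0
        for: 10m
        annotations:
          summary: "Backup on {{ $labels.node }} is failing"
```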
Triggering the script

To trigger the script every 6 hours we can use cron. To add a new entry:

crontab -e

Then add the entry:

0 */6 * * * ~/scripts/

And ensure the script has executable permissions:

chmod +x ~/scripts/

Now your backups should run every 6 hours.