
Metrics on your custom shell scripts

I was backing up my Arch desktop and wanted a way to visualize whether my backups are succeeding or failing. The stack I will be using for that:

  1. Pushgateway, Prometheus and Grafana running on Kubernetes
  2. Linux host which I want to make backups of

How I am going to do this

I have an NFS server backed by a 3-node replicated GlusterFS cluster, and its export is mounted on my Arch desktop at /mnt. The idea is that every 6 hours a bash script runs, backing up a set of directories to /mnt with rsync, which in turn lands them on the NFS server.

Once the backup has completed, we send a metric to Pushgateway and then visualize it with Grafana.

Arch Desktop

I have installed cronie and rsync and my mounts look like this:

df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
10.18.0.223:/   20G  7.1G   13G  36% /mnt
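
For completeness, this is roughly how the export gets mounted; the server address matches the df output above, and the mount options are just sensible defaults you may want to adjust:

# Mount the NFS export manually (requires nfs-utils on Arch)
sudo mount -t nfs 10.18.0.223:/ /mnt

# Or make it persistent across reboots with an /etc/fstab entry like:
# 10.18.0.223:/   /mnt   nfs   defaults,_netdev   0   0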

Then in my scripts directory I have a backup script:

#!/bin/bash

# Pushgateway endpoint: job name plus a node grouping label
PUSHGATEWAY_URL="http://pushgateway.sektorlab.tech/metrics/job/pushgateway-exporter/node/arch-desktop"

if [[ "$(cat /mnt/status.lock 2>/dev/null)" == "ok" ]]
then
  # Good to backup
  echo "lock returned ok, ready to backup"

  # Source directories
  SRC1=~/workspace
  SRC2=~/scripts
  SRC3=~/Documents

  # Destination directory on NFS server
  DEST=/mnt/arch-desktop

  # Rsync options: archive mode, verbose, delete files removed from the source
  OPTIONS=(-av --delete)

  # Rsync command to sync directories
  rsync "${OPTIONS[@]}" "$SRC1" "$DEST"
  rsync "${OPTIONS[@]}" "$SRC2" "$DEST"
  rsync "${OPTIONS[@]}" "$SRC3" "$DEST"

  # Send success metric to pushgateway
  echo "backups_completed 1" | curl --silent --data-binary @- "$PUSHGATEWAY_URL"

else
  # Lock file missing or not "ok": skip the backup
  # Send failure metric to pushgateway
  echo "backups_completed 0" | curl --silent --data-binary @- "$PUSHGATEWAY_URL"

fi

First I verify that the share is actually mounted: I stored a file called status.lock with the content ok on the NFS server, so the script starts by reading that file. Once I can confirm it is present and reads ok, I back up my directories, and when that has completed I send a request to Pushgateway.
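
The lock file only needs to be created once on the share; the path and contents just have to match whatever the script checks for:

# Run once from a machine that has the share mounted
echo ok > /mnt/status.lock
cat /mnt/status.lock   # should print: ok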

You can see that if the script fails to read the status file, we send a metric value of 0 to Pushgateway. The resulting metric will look like this in Prometheus:

{
  __name__="backups_completed", 
  container="pushgateway", 
  endpoint="http", 
  job="pushgateway-exporter", 
  namespace="monitoring", 
  node="arch-desktop", 
  pod="pushgateway-5c58fc86ff-4g2ck", 
  service="pushgateway"
}
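
Before wiring up Grafana, you can confirm the push worked by scraping the Pushgateway's own /metrics endpoint, which re-exposes everything pushed to it (same hostname as in the script):

curl -s http://pushgateway.sektorlab.tech/metrics | grep backups_completed
# should print a backups_completed sample with the job and node labels attached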

Visualizing on Grafana

In Grafana we can then use the Prometheus datasource to query our backup metric with something like:

sum(backups_completed) by (node)

Which looks like this:

(Grafana panel showing backups_completed summed per node)

Triggering the script

To trigger the script every 6 hours, we can use cron. To add a new entry:

crontab -e

Then add the entry:

0 */6 * * * ~/scripts/backups.sh
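
Cron discards (or mails) the script's output by default, so if you want a record of each run, redirect it to a log file. The log path here is just an example:

0 */6 * * * ~/scripts/backups.sh >> ~/scripts/backup.log 2>&1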

And ensure the script has executable permissions:

chmod +x ~/scripts/backups.sh

Now your backups should run every 6 hours.
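
Before relying on the schedule, it is worth running the script once by hand and checking that the value shows up in Grafana (or via the Pushgateway curl above):

# Run the backup once manually and watch the rsync output
~/scripts/backups.sh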