Keep your Secrets Secure

This post demonstrates how I moved from creating Kubernetes secrets directly to using Sealed Secrets, where I create a SealedSecret and let the controller create the Kubernetes secret for me.

Current Secrets Management Workflow

One thing I don't like about my current workflow is how I create and update secrets inside Kubernetes. It goes something like this:

  • Base64 encode a value
  • Create the secret in a yaml file
  • Deploy the secret then remove the yaml file
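
For illustration, the manual flow looks roughly like this (the value and file name here are placeholders):

# base64 encode the value
echo -n 'admin' | base64   # -> YWRtaW4=

# paste the encoded value into a temporary secret.yaml, apply it,
# then delete the file so it never lands in git
kubectl apply -f secret.yaml
rm secret.yaml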

But when I want to rotate or update the secret:

  • View the kubernetes secret, see if it needs changing
  • Base64 encode the new value
  • Store it into a new yaml file
  • Deploy the new updated secret

I find this workflow tedious, so I switched to sealed-secrets.

What is Sealed Secrets

Sealed Secrets lets you encrypt your secrets with the kubeseal utility, which uses asymmetric crypto so that only the controller can decrypt them. This makes it safe to store the sealed value of a secret in a public git repository, since only the controller can decrypt the encrypted string.

In practice, we generate the secret, pass it to kubeseal, and write the resulting SealedSecret to stdout. We then save it as a SealedSecret resource, which can safely be stored in a public git repository, for example.

Install Dependencies

We will need to install kubeseal. For macOS using Homebrew, it's:

brew install kubeseal

For other operating systems, please see:

  • https://github.com/bitnami-labs/sealed-secrets?tab=readme-ov-file#kubeseal

Deploy the Sealed Secrets Controller

For ArgoCD inside my apps/kube-system/sealed-secrets/Chart.yaml I have:

---
apiVersion: v2
name: sealed-secrets
description: Sealed Secrets Helm Chart
type: application
version: 0.26.2
dependencies:
- name: sealed-secrets
  version: 2.15.3
  repository: https://bitnami-labs.github.io/sealed-secrets/

And inside my apps/kube-system/sealed-secrets/values.yaml I have:

sealed-secrets:
  fullnameOverride: sealed-secrets-controller
  createController: true
  secretName: "sealed-secrets-key"

  metrics:
    serviceMonitor:
      enabled: true
      namespace: "monitoring"
      labels:
        release: kube-prometheus-stack
    dashboards:
      create: true
      labels:
        grafana_dashboard: "1"
      annotations:
        grafana_folder: "Sektorlab"
      namespace: "monitoring"

Inside the apps/kube-system/sealed-secrets directory I executed:

helm dependency update

I then pushed the changes up to main, and shortly after, the sealed-secrets controller pod started in the kube-system namespace.

If you are deploying with Helm directly, you can remove the top-level sealed-secrets: key from the values and deploy.
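
A sketch of what that could look like, using the chart repository and version from the Chart.yaml above (release name and namespace are up to you):

helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets/
helm upgrade --install sealed-secrets sealed-secrets/sealed-secrets \
  --version 2.15.3 \
  --namespace kube-system \
  --values values.yaml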

How to Create a Sealed Secret

We first create a Kubernetes secret, pass it to kubeseal, and save the output as YAML. For example, to create a secret with admin-user: admin:

kubectl create secret generic db-secrets \
  --from-literal=admin-user=admin \
  --namespace default --dry-run=client \
  --output yaml | \
  kubeseal --format yaml \
  --namespace default \
  --scope=namespace-wide > sealedsecret.yaml
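
kubeseal can also seal offline against the controller's public certificate, which is handy when you don't have direct cluster access. A sketch, assuming the controller name and namespace from the deployment above:

# fetch the controller's public cert once
kubeseal --fetch-cert \
  --controller-name=sealed-secrets-controller \
  --controller-namespace=kube-system > pub-cert.pem

# seal against the saved cert, no cluster access needed
kubectl create secret generic db-secrets \
  --from-literal=admin-user=admin \
  --namespace default --dry-run=client --output yaml | \
  kubeseal --format yaml --cert pub-cert.pem \
  --scope=namespace-wide > sealedsecret.yaml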

When we look at sealedsecret.yaml we will see something like:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  annotations:
    sealedsecrets.bitnami.com/namespace-wide: "true"
  name: db-secrets
  namespace: default
spec:
  encryptedData:
    admin-user: AgBY3WYt+Vqyz6jl57gwfPYUUK505waNU2MMCQSLJbwklOh4CXEW8ZSp9Ze1b5QwFgionV4Ch7Al9gKdGvjupYe+n/Og5Il/Nd7FzAzvg69g8xFKMDwy7YCSAJypgUevZ7Ff1WKcyV0T6P0TO1+aUquLuMb1aqFSQHvfWc7xhhbzO+U+/f7t+bFiuHDXUMpR5qzOGCejfFoF26a8BSguY20P1BjqOaW402Y/4sVnK8Zm+rDweq0Ddx19tB09c21hBoau2cGOSz7auRK2Rw+QT9AZW2QlJZD36AHK+mW5gJsA8It1AGbsZAyzAcnWA/PCmOI9KypnWxXHNZrDutb18pwWtsIrMWDbWbg2jUV2Ag7ZjSTcRqOKHQaqevBeJAk3i2RzSdJAALNKODjfCnaho9ijUgUwovgqD+djVDVoF0Hh8YfNzxR94JtVKDA8kns/SQklucoIXwek0lO5O1Yy3sEtvO9NdIn1aTfZqj2qnxdRnldIr+sSMTyF6oa8xQGQhoS0Q1vMWH1Kg/vAcoHdGwTwB3uO3A3w5a63OX8FYRjhEE5D/X4b3UBm0LE32nvBGVKTSODZFF/GPoC04tCr9rGWRxl3hVZOuM+SshCGCc7F/Lr+W+lEkBkcWt7y4YvggPymog/tBx7KFi0at+6W85+jJ9h0/YHVaexa0gSRxKQaZhZnQ8BxEctKVN7NvvR8zj3Gvz/fcw==
  template:
    metadata:
      annotations:
        sealedsecrets.bitnami.com/namespace-wide: "true"
      name: db-secrets
      namespace: default

We can then apply the yaml with:

kubectl apply -f sealedsecret.yaml

We can then view the events from the namespace to see if the secret was successfully unsealed:

kubectl get events -n default
LAST SEEN   TYPE     REASON     OBJECT                    MESSAGE
21s         Normal   Unsealed   sealedsecret/db-secrets   SealedSecret unsealed successfully

And then we can inspect the kubernetes secret which was created from the sealedsecret:

kubectl get secrets/db-secrets -o yaml
apiVersion: v1
kind: Secret
type: Opaque
data:
  admin-user: YWRtaW4=
metadata:
  annotations:
    sealedsecrets.bitnami.com/namespace-wide: "true"
  name: db-secrets
  namespace: default
  ownerReferences:
  - apiVersion: bitnami.com/v1alpha1
    controller: true
    kind: SealedSecret
    name: db-secrets
    uid: a2495f22-b40d-4afb-ae6f-b036474b2b6d
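
To confirm the unsealed value matches what we sealed, we can decode it:

kubectl get secret db-secrets -o jsonpath='{.data.admin-user}' | base64 -d
admin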

New Workflow

So the new workflow enables me to do the following:

  • Run the kubectl create secret command, pipe it through kubeseal, and commit the output YAML to git
  • Reference the secret that will be created in my workload
  • Once ArgoCD deploys the sealedsecret, a secret is generated and the workload can pick up the secret
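
For rotation, kubeseal can also merge a new value into an existing SealedSecret file instead of starting from scratch. A sketch, assuming the sealedsecret.yaml from earlier (the new password is a placeholder):

echo -n 'new-password' | kubectl create secret generic db-secrets \
  --namespace default --dry-run=client \
  --from-file=admin-user=/dev/stdin --output yaml | \
  kubeseal --format yaml --scope=namespace-wide \
  --merge-into sealedsecret.yaml

Committing the updated file is then enough for ArgoCD to roll out the new value.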

Resources

  • https://github.com/bitnami-labs/sealed-secrets

Backup your Databases

I've had some issues with my persistent volumes on Kubernetes. I was lucky enough to be able to fix them, but that's not always the case, so I decided to make regular backups of my MySQL databases. And since I'm using Kubernetes, why not use the CronJob resource to make the backups.

Plan of Action

I wanted to achieve the following:

  1. Use a custom container image that hosts the backup script.
  2. Run a pod on a schedule that mounts an NFS-backed volume.
  3. Create a backup with a timestamp suffix and store it on NFS every day.
  4. Run a weekly job that removes backups older than 7 days, keeping roughly the latest week's backups on disk.

The solution I came up with is to use Kubernetes CronJobs that run on schedules of our choice.

Container Image

First we build our container image, defined in the file Dockerfile:

FROM alpine:3.19.1

# Install dependencies
RUN apk --no-cache add mysql-client

# Copy binary
COPY bin/db-backup /usr/bin/db-backup
RUN chmod +x /usr/bin/db-backup

# Execute
CMD ["db-backup"]

Then we need to define our backup script in bin/db-backup:

#!/usr/bin/env sh

# MySQL credentials
DB_USER="${DB_USER:-}"
DB_PASS="${DB_PASS:-}"
DB_HOST="${DB_HOST:-}"

# Backup directory
BACKUP_DIR="${BACKUP_DIR:-/data/backups}"
DATE=$(date +"%Y%m%d%H%M")
BACKUP_FILE="$BACKUP_DIR/all_databases_$DATE.sql.gz"

# Function to log and exit on error
log_and_exit() {
  echo "$(date +"%Y-%m-%d %H:%M:%S") - $1"
  exit 1
}

# Check if required environment variables are set
if [ -z "$DB_USER" ] || [ -z "$DB_PASS" ] || [ -z "$DB_HOST" ]; then
    log_and_exit "Error: One or more required environment variables (DB_USER, DB_PASS, DB_HOST) are not set."
fi

# Ensure the backup directory exists
mkdir -p "$BACKUP_DIR"

# Dump all databases and gzip the output
mysqldump -u "$DB_USER" -p"$DB_PASS" -h "$DB_HOST" --all-databases | gzip > "$BACKUP_FILE"

# Verify the backup file
if [ -f "$BACKUP_FILE" ]; then
  echo "[$DATE] Backup successful: $BACKUP_FILE"
else
  log_and_exit "Backup failed!"
fi

As you can see, the script relies on environment variables that need to be present in the runtime environment.

Then continue to build the container image:

docker build -t backup-image .

Then push it to your registry. I have published mine at ruanbekker/mysql-backups:alpine-latest.
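
To test the image locally before wiring it into Kubernetes, you can run it with the required environment variables (a sketch; the credentials and host are placeholders, and /data/backups matches the script's default BACKUP_DIR):

docker run --rm \
  -e DB_USER=admin \
  -e DB_PASS=admin \
  -e DB_HOST=mysql.example.com \
  -v "$PWD/backups:/data/backups" \
  backup-image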

Backup CronJob

Now that we have our container image published, we can define our cronjob that will do backups every morning at 2AM, in templates/mysql-backup-job.yaml:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysql-db-backup
  namespace: databases
spec:
  schedule: "* 2 * * *"  # Runs daily at 2:00 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: db-backup
            image: "ruanbekker/mysql-backups:alpine-latest"
            imagePullPolicy: Always
            env:
            - name: DB_HOST
              value: "mysql.databases.svc.cluster.local"
            - name: BACKUP_DIR
              value: "/backup"
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: mysql-db-backup-secrets
                  key: DB_USER
            - name: DB_PASS
              valueFrom:
                secretKeyRef:
                  name: mysql-db-backup-secrets
                  key: DB_PASS
            volumeMounts:
            - name: backup-storage
              mountPath: /backup
          restartPolicy: OnFailure
          volumes:
          - name: backup-storage
            persistentVolumeClaim:
              claimName: mysql-backup-pvc

We can see we are referencing some environment variables using secrets, so let's create those secrets in templates/secrets.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: mysql-db-backup-secrets
  namespace: databases
type: Opaque
data:
  DB_USER: YWRtaW4=
  DB_PASS: YWRtaW4=
  # echo -n 'admin' | base64

That covers our DB_USER and DB_PASS. Also ensure that DB_HOST is set to the endpoint of your MySQL host (mine is the service endpoint inside the cluster), and that BACKUP_DIR is the backup directory inside your pod; it needs to match the volumeMounts section.
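
If you prefer not to base64 encode values by hand, the same secret can be generated with kubectl (a sketch; the values are placeholders):

kubectl create secret generic mysql-db-backup-secrets \
  --namespace databases \
  --from-literal=DB_USER=admin \
  --from-literal=DB_PASS=admin \
  --dry-run=client --output yaml > templates/secrets.yaml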

Backup Cleanup

Then lastly the job that will clean up the old backups once a week can be defined in templates/mysql-cleanup-job.yaml:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup-cleanup
  namespace: databases
spec:
  schedule: "0 3 * * 0"  # Runs weekly at 3:00 AM on Sundays
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: db-backup-cleanup
            image: "alpine:3.19.1"
            command:
            - /bin/sh
            - -c
            - |
              find /backup -type f -mtime +7 -name '*.sql.gz' -exec rm {} \;
            volumeMounts:
            - name: backup-storage
              mountPath: /backup
          restartPolicy: OnFailure
          volumes:
          - name: backup-storage
            persistentVolumeClaim:
              claimName: mysql-backup-pvc

Persistent Volume Claim

I have an NFS storage class named nfs. We need to create a persistent volume claim and let both jobs use it, since both jobs need access to the data on that storage path. Inside templates/mysql-pvc-for-jobs.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-backup-pvc
  namespace: databases
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 5Gi

Deploy

Once we have everything defined, we can deploy them using kubectl apply -f templates/. Just make sure you review the namespaces, storage classes etc.
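
Before the first job fires, it's also worth confirming that the PVC bound against the nfs storage class:

kubectl get pvc mysql-backup-pvc -n databases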

You can view the resources using:

kubectl get cronjobs -n databases
NAME                SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
db-backup-cleanup   0 3 * * 0   False     0        <none>          47h
mysql-db-backup     0 2 * * *   False     0        15h             2d
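
To test the backup without waiting for the schedule, you can trigger a one-off Job from the CronJob (the job name here is arbitrary):

kubectl create job mysql-db-backup-manual --from=cronjob/mysql-db-backup -n databases
kubectl logs -n databases job/mysql-db-backup-manual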

Self-Hosted Version Control with Gitea

In this post we will be deploying and configuring Gitea, which is an open source version control system.

Prepare Deployment using ArgoCD

We will be deploying Gitea with the following configuration:

  • Path in the gitops repo: apps/gitea/
  • Namespace: gitea
  • Ingress: git.int.sektorlab.tech
  • Persistence: local-path
  • Database: postgresql

First we need to get the chart details from Artifact Hub: https://artifacthub.io/packages/helm/gitea/gitea/10.1.4

At the time of writing the latest version of the chart is 10.1.4, so first we need to define our Chart.yaml inside our gitea directory within our gitops repository, at apps/gitea/Chart.yaml:

apiVersion: v2
name: gitea
description: Gitea helm chart
type: application
version: 10.1.4
dependencies:
- name: gitea
  version: 10.1.4
  repository: https://dl.gitea.io/charts

The next part is to configure the values.yaml inside our directory at apps/gitea/values.yaml:

gitea:
  replicaCount: 1
  global:
    storageClass: "local-path"

  service:
    http:
      type: ClusterIP
      port: 3000
    ssh:
      type: LoadBalancer
      port: 22
      annotations:
        metallb.universe.tf/allow-shared-ip: nginx
        metallb.universe.tf/loadBalancerIPs: 10.8.0.115

  ingress:
    enabled: true
    className: nginx
    hosts:
      - host: git.int.sektorlab.tech
        paths:
          - path: /
            pathType: Prefix
    apiVersion: networking.k8s.io/v1

  persistence:
    enabled: true
    create: true
    mount: true
    size: 10Gi
    accessModes:
      - ReadWriteOnce
    storageClass: local-path
    annotations:
      helm.sh/resource-policy: keep

  gitea:
    admin:
      # existingSecret: gitea-admin-secret
      existingSecret:
      username: gitea_admin
      password: gitea_admin
      email: "gitea@local.domain"

    config:
      server:
        SSH_PORT: 22
        SSH_LISTEN_PORT: 2222
        DOMAIN: git.int.sektorlab.tech
        ROOT_URL: https://git.int.sektorlab.tech
        SSH_DOMAIN: git.int.sektorlab.tech

  redis-cluster:
    enabled: true
    usePassword: false
    cluster:
      nodes: 3
      replicas: 0

  postgresql-ha:
    enabled: true
    global:
      postgresql:
        database: gitea
        password: gitea
        username: gitea
    primary:
      persistence:
        size: 10Gi

Once we have our values defined, we need to download and package our chart dependencies. First, go to the directory:

cd apps/gitea

Then run a dependency update with helm:

helm dependency update

Then push the changes up to the remote branch of our gitops repository so that ArgoCD can deploy.

Deploying with Helm

If you are using helm to deploy the changes, you can add the repository:

helm repo add gitea https://dl.gitea.io/charts

Then save the values above as values.yaml, remove the top-level gitea: key, shift everything one indentation level to the left, and deploy using:

helm upgrade --install gitea gitea/gitea --version 10.1.4 --values values.yaml

Verify

Verify that gitea has been deployed, which can be done using kubectl:

kubectl get pods -n gitea

Access

Then we can view our ingress using:

kubectl get ingress -n gitea

And then we can access gitea using the username/password that we have set in the values.
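
If the ingress is not resolvable yet, you can also port-forward to the Gitea HTTP service as a quick check (this assumes the chart created a service named gitea-http for the gitea release):

kubectl port-forward -n gitea svc/gitea-http 3000:3000

Then browse to http://localhost:3000.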

Metrics on your custom shell scripts

I was backing up my Arch desktop and wanted a way to visualize whether my backups are succeeding or failing. The stack I will be using for that:

  1. Pushgateway, Prometheus and Grafana running on Kubernetes
  2. Linux host which I want to make backups of

How I am going to do this

I have an NFS server, backed by a 3 node replicated GlusterFS cluster, which is mounted on my Arch desktop at /mnt. The idea is that every 6 hours a bash script backs up a set of directories with rsync to /mnt, which lands on the NFS share.

Once the backup has completed, we send a metric to Pushgateway and then using Grafana to visualize it.

Arch Desktop

I have installed cronie and rsync, and my mount looks like this:

df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
10.18.0.223:/   20G  7.1G   13G  36% /mnt

Then in my scripts directory I have a backup script:

#!/bin/bash

if [[ "$(cat /mnt/status.lock)" == "ok" ]]
then
  # Good to backup
  echo "lock returned ok, ready to backup"

  # Source directories
  SRC1=~/workspace
  SRC2=~/scripts
  SRC3=~/Documents

  # Destination directory on NFS server
  DEST=/mnt/arch-desktop

  # Rsync options
  OPTIONS="-av --delete"

  # Rsync command to sync directories
  rsync $OPTIONS $SRC1 $DEST
  rsync $OPTIONS $SRC2 $DEST
  rsync $OPTIONS $SRC3 $DEST

  # Send metric to pushgateway
  echo "backups_completed 1" | curl --silent --data-binary @- "http://pushgateway.sektorlab.tech/metrics/job/pushgateway-exporter/node/arch-desktop"

else
  # Mount check failed, skip the backup
  # Send failure metric to pushgateway
  echo "backups_completed 0" | curl --silent --data-binary @- "http://pushgateway.sektorlab.tech/metrics/job/pushgateway-exporter/node/arch-desktop"

fi

First I verify that my mount is actually mounted: I stored a file status.lock with the content ok on the NFS server, so the script starts by reading that file. Once I confirm it is present, I continue to back up my directories, and once that completes I send a request to Pushgateway.

You can see that if it fails to read the status file, we send a metric value of 0 to Pushgateway. The resulting metric will look like this in Prometheus:

{
  __name__="backups_completed", 
  container="pushgateway", 
  endpoint="http", 
  job="pushgateway-exporter", 
  namespace="monitoring", 
  node="arch-desktop", 
  pod="pushgateway-5c58fc86ff-4g2ck", 
  service="pushgateway"
}
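
Before building the dashboard, you can confirm the metric actually reached the Pushgateway by scraping its metrics endpoint directly, using the same endpoint the script pushes to:

curl -s http://pushgateway.sektorlab.tech/metrics | grep backups_completed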

Visualizing on Grafana

On Grafana we can then use the prometheus datasource to query for our backups using something like:

sum(backups_completed) by (node)

Which looks like this:

[screenshot: Grafana panel showing backups_completed per node]

Triggering the script

To trigger the script every 6 hours, we can use cron. To add a new entry:

crontab -e

Then add the entry:

0 */6 * * * ~/scripts/backups.sh

And ensure the script has executable permissions:

chmod +x ~/scripts/backups.sh

Now your backups should run every 6 hours.
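
Before relying on cron, it's worth running the script once by hand and confirming the cron entry was saved:

# run a backup manually
~/scripts/backups.sh

# list the installed crontab
crontab -l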

Welcome to the homelab journey

Welcome everyone 👋, this website is intended to document my journey of my homelab.

Who am I?

👋 Hi, I'm Ruan

I am a passionate DevOps Engineer and love to share my knowledge with the world. I enjoy tinkering with homelabs and self-hosting, as I learn by doing and break things to understand how to fix them.

My Tutorial Blogs

I enjoy publishing technical "how-to" tutorial content on the following blogs:

More about me

You can visit my website if you would like to reach out or see more of my work: