
Keep your Secrets Secure

This post demonstrates how I changed from creating Kubernetes secrets directly to using Sealed Secrets, where we create a SealedSecret and let the controller create the Kubernetes secret for us.

Current Secrets Management Workflow

One thing I don't like about my current workflow is how I create and update secrets inside Kubernetes. It looks something like this:

  • Base64 encode a value
  • Create the secret in a yaml file
  • Deploy the secret then remove the yaml file

But when I want to rotate or update the secret:

  • View the kubernetes secret, see if it needs changing
  • Base64 encode the new value
  • Store it into a new yaml file
  • Deploy the new updated secret

I find this workflow tedious, so I switched to sealed-secrets.

What is Sealed Secrets

Sealed Secrets enables you to encrypt your secrets with the kubeseal utility, which uses asymmetric crypto so that only the controller can decrypt them. This makes it possible to store the sealed value of your secret in a public git repository, since only the controller can decrypt the encrypted string.

So in practice we generate the secret, pass it to kubeseal, and dump the SealedSecret to stdout. We can then store that SealedSecret resource safely, for example in a public git repository.
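
Under the hood the controller holds the private key and publishes the public certificate. Once the controller is deployed (covered below), you can export that certificate and even seal secrets offline against it; a rough sketch, where pub-cert.pem and secret.yaml are just example filenames:

# Fetch the controller's public certificate (requires cluster access once)
kubeseal --fetch-cert > pub-cert.pem

# Later, seal a dry-run secret manifest offline against the saved certificate
kubeseal --cert pub-cert.pem --format yaml < secret.yaml > sealedsecret.yaml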

Install Dependencies

We will need to install kubeseal, and for macOS using Homebrew it's:

brew install kubeseal

For other operating systems, please see:

  • https://github.com/bitnami-labs/sealed-secrets?tab=readme-ov-file#kubeseal

Deploy the Sealed Secrets Controller

For ArgoCD inside my apps/kube-system/sealed-secrets/Chart.yaml I have:

---
apiVersion: v2
name: sealed-secrets
description: Sealed Secrets Helm Chart
type: application
version: 0.26.2
dependencies:
- name: sealed-secrets
  version: 2.15.3
  repository: https://bitnami-labs.github.io/sealed-secrets/

And inside my apps/kube-system/sealed-secrets/values.yaml I have:

sealed-secrets:
  fullnameOverride: sealed-secrets-controller
  createController: true
  secretName: "sealed-secrets-key"

  metrics:
    serviceMonitor:
      enabled: true
      namespace: "monitoring"
      labels:
        release: kube-prometheus-stack
    dashboards:
      create: true
      labels:
        grafana_dashboard: "1"
      annotations:
        grafana_folder: "Sektorlab"
      namespace: "monitoring"

Inside the apps/kube-system/sealed-secrets directory I executed:

helm dependency update

I then pushed the changes up to main, and shortly after I saw the sealed-secrets controller pod start in the kube-system namespace.

If you are deploying with Helm directly instead of ArgoCD, you can remove the top-level sealed-secrets: key from the values and deploy.
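
A rough sketch of that, assuming the upstream chart, the kube-system namespace, and a values.yaml with the sealed-secrets: key flattened out:

helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm repo update
helm upgrade --install sealed-secrets sealed-secrets/sealed-secrets \
  --namespace kube-system \
  --values values.yaml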

How to Create a Sealed Secret

We first generate a Kubernetes secret manifest with kubectl (using --dry-run), pipe it to kubeseal, and save the output to yaml. For example, to create a secret with admin-user: admin:

kubectl create secret generic db-secrets \
  --from-literal=admin-user=admin \
  --namespace default --dry-run=client \
  --output yaml | \
  kubeseal --format yaml \
  --namespace default \
  --scope=namespace-wide > sealedsecret.yaml

When we look at sealedsecret.yaml we will see something like:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  annotations:
    sealedsecrets.bitnami.com/namespace-wide: "true"
  name: db-secrets
  namespace: default
spec:
  encryptedData:
    admin-user: AgBY3WYt+Vqyz6jl57gwfPYUUK505waNU2MMCQSLJbwklOh4CXEW8ZSp9Ze1b5QwFgionV4Ch7Al9gKdGvjupYe+n/Og5Il/Nd7FzAzvg69g8xFKMDwy7YCSAJypgUevZ7Ff1WKcyV0T6P0TO1+aUquLuMb1aqFSQHvfWc7xhhbzO+U+/f7t+bFiuHDXUMpR5qzOGCejfFoF26a8BSguY20P1BjqOaW402Y/4sVnK8Zm+rDweq0Ddx19tB09c21hBoau2cGOSz7auRK2Rw+QT9AZW2QlJZD36AHK+mW5gJsA8It1AGbsZAyzAcnWA/PCmOI9KypnWxXHNZrDutb18pwWtsIrMWDbWbg2jUV2Ag7ZjSTcRqOKHQaqevBeJAk3i2RzSdJAALNKODjfCnaho9ijUgUwovgqD+djVDVoF0Hh8YfNzxR94JtVKDA8kns/SQklucoIXwek0lO5O1Yy3sEtvO9NdIn1aTfZqj2qnxdRnldIr+sSMTyF6oa8xQGQhoS0Q1vMWH1Kg/vAcoHdGwTwB3uO3A3w5a63OX8FYRjhEE5D/X4b3UBm0LE32nvBGVKTSODZFF/GPoC04tCr9rGWRxl3hVZOuM+SshCGCc7F/Lr+W+lEkBkcWt7y4YvggPymog/tBx7KFi0at+6W85+jJ9h0/YHVaexa0gSRxKQaZhZnQ8BxEctKVN7NvvR8zj3Gvz/fcw==
  template:
    metadata:
      annotations:
        sealedsecrets.bitnami.com/namespace-wide: "true"
      name: db-secrets
      namespace: default

We can then apply the yaml with:

kubectl apply -f sealedsecret.yaml

We can then view the events from the namespace to see if the secret was successfully unsealed:

kubectl get events -n default
LAST SEEN   TYPE     REASON     OBJECT                    MESSAGE
21s         Normal   Unsealed   sealedsecret/db-secrets   SealedSecret unsealed successfully

And then we can inspect the kubernetes secret which was created from the sealedsecret:

kubectl get secrets/db-secrets -o yaml
apiVersion: v1
kind: Secret
type: Opaque
data:
  admin-user: YWRtaW4=
metadata:
  annotations:
    sealedsecrets.bitnami.com/namespace-wide: "true"
  name: db-secrets
  namespace: default
  ownerReferences:
  - apiVersion: bitnami.com/v1alpha1
    controller: true
    kind: SealedSecret
    name: db-secrets
    uid: a2495f22-b40d-4afb-ae6f-b036474b2b6d
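
Coming back to the rotation pain from the old workflow: to update or add a key later, kubeseal has a --merge-into option that seals the new value and merges it into an existing SealedSecret file, so the plain secret never needs to be stored. The upstream README shows this with JSON output; a rough sketch adapted to the sealedsecret.yaml we created above:

echo -n 'new-password' | kubectl create secret generic db-secrets \
  --from-file=admin-user=/dev/stdin \
  --namespace default --dry-run=client \
  --output yaml | \
  kubeseal --format yaml \
  --namespace default \
  --scope=namespace-wide \
  --merge-into sealedsecret.yaml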

New Workflow

So the new workflow enables me to do the following:

  • Run kubectl create secret with --dry-run, pipe it through kubeseal, and commit the SealedSecret yaml to git
  • Reference the secret that will be deployed in my workload (see the sketch after this list)
  • Once ArgoCD deploys the SealedSecret, a secret is generated and the workload can pick up the secret
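
To make that reference concrete, here is a rough sketch of how a workload can consume the unsealed secret; the deployment name and image are placeholders, and db-secrets is the secret created by the controller above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # placeholder name
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:alpine     # placeholder image
        env:
        - name: ADMIN_USER
          valueFrom:
            secretKeyRef:
              name: db-secrets  # the secret unsealed by the controller
              key: admin-user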

Resources

  • https://github.com/bitnami-labs/sealed-secrets

Backup your Databases

I've had some issues with my persistent volumes on Kubernetes. I was lucky that I was able to fix them, but that's not always the case. I decided to make regular backups of my MySQL databases, and since I'm using Kubernetes, why not use the CronJob resource to make the backups.

Plan of Action

I wanted to achieve the following:

  1. Use a custom container image that hosts the backup script.
  2. Run a pod on a time schedule that mounts a NFS volume to the pod.
  3. Create a backup with a timestamp suffix and store it to NFS every day.
  4. Run a weekly job that will keep the latest 7 backups on disk.

The solution that I came up with is to use Kubernetes CronJobs that run on the time schedules of our choice.

Container Image

First we build our container image. In the file Dockerfile:

FROM alpine:3.19.1

# Install dependencies
RUN apk --no-cache add mysql-client

# Copy binary
COPY bin/db-backup /usr/bin/db-backup
RUN chmod +x /usr/bin/db-backup

# Execute
CMD ["db-backup"]

Then we need to define our backup script in bin/db-backup:

#!/usr/bin/env sh

# MySQL credentials
DB_USER="${DB_USER:-}"
DB_PASS="${DB_PASS:-}"
DB_HOST="${DB_HOST:-}"

# Backup directory
BACKUP_DIR="${BACKUP_DIR:-/data/backups}"
DATE=$(date +"%Y%m%d%H%M")
BACKUP_FILE="$BACKUP_DIR/all_databases_$DATE.sql.gz"

# Function to log and exit on error
log_and_exit() {
  echo "$(date +"%Y-%m-%d %H:%M:%S") - $1"
  exit 1
}

# Check if required environment variables are set
if [ -z "$DB_USER" ] || [ -z "$DB_PASS" ] || [ -z "$DB_HOST" ]; then
    log_and_exit "Error: One or more required environment variables (DB_USER, DB_PASS, DB_HOST) are not set."
fi

# Ensure the backup directory exists
mkdir -p "$BACKUP_DIR"

# Dump all databases and gzip the output
mysqldump -u "$DB_USER" -p"$DB_PASS" -h "$DB_HOST" --all-databases | gzip > "$BACKUP_FILE"

# Verify the backup file
if [ -f "$BACKUP_FILE" ]; then
  echo "[$DATE] Backup successful: $BACKUP_FILE"
else
  log_and_exit "Backup failed!"
fi

As you can see, we rely on environment variables that need to be present in our runtime environment.

Then continue to build the container image:

docker build -t backup-image .
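
If you want to sanity-check the script before pushing, you can run the image locally against a reachable MySQL host; a rough sketch with placeholder credentials and host, writing into a local ./backups directory:

docker run --rm \
  -e DB_USER=admin \
  -e DB_PASS=admin \
  -e DB_HOST=192.168.64.1 \
  -v "$(pwd)/backups:/data/backups" \
  backup-image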

Then push it to your registry. I have published mine at ruanbekker/mysql-backups:alpine-latest.
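
A rough sketch of tagging and pushing, using my image name as the example (substitute your own registry and repository):

docker tag backup-image ruanbekker/mysql-backups:alpine-latest
docker push ruanbekker/mysql-backups:alpine-latest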

Backup CronJob

Now that we have our container image published, we can define our CronJob that will run a backup every morning at 2AM, in templates/mysql-backup-job.yaml:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysql-db-backup
  namespace: databases
spec:
  schedule: "* 2 * * *"  # Runs daily at 2:00 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: db-backup
            image: "ruanbekker/mysql-backups:alpine-latest"
            imagePullPolicy: Always
            env:
            - name: DB_HOST
              value: "mysql.databases.svc.cluster.local"
            - name: BACKUP_DIR
              value: "/backup"
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: mysql-db-backup-secrets
                  key: DB_USER
            - name: DB_PASS
              valueFrom:
                secretKeyRef:
                  name: mysql-db-backup-secrets
                  key: DB_PASS
            volumeMounts:
            - name: backup-storage
              mountPath: /backup
          restartPolicy: OnFailure
          volumes:
          - name: backup-storage
            persistentVolumeClaim:
              claimName: mysql-backup-pvc

We can see we are referencing some environment variables using secrets, so let's create those secrets in templates/secrets.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: mysql-db-backup-secrets
  namespace: databases
type: Opaque
data:
  DB_USER: YWRtaW4=
  DB_PASS: YWRtaW4=
  # echo -n 'admin' | base64

That includes our DB_USER and DB_PASS. Ensure that you also set DB_HOST, which is the endpoint of your MySQL host (as you can see, mine is the service endpoint inside the cluster), as well as BACKUP_DIR, which is the backup directory inside your pod; this needs to match the volumeMounts section.
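
Since we covered Sealed Secrets earlier, you could also avoid committing this plain base64 secret to git and generate a SealedSecret for it instead; a rough sketch reusing the kubeseal workflow from above (the output filename is just an example):

kubectl create secret generic mysql-db-backup-secrets \
  --from-literal=DB_USER=admin \
  --from-literal=DB_PASS=admin \
  --namespace databases --dry-run=client \
  --output yaml | \
  kubeseal --format yaml \
  --namespace databases > templates/mysql-db-backup-sealedsecret.yaml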

Backup Cleanup

Lastly, the job that will clean up old backups once a week can be defined in templates/mysql-cleanup-job.yaml:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup-cleanup
  namespace: databases
spec:
  schedule: "0 3 * * 0"  # Runs weekly at 3:00 AM on Sundays
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: db-backup-cleanup
            image: "alpine:3.19.1"
            command:
            - /bin/sh
            - -c
            - |
              find /backup -type f -mtime +7 -name '*.sql.gz' -exec rm {} \;
            volumeMounts:
            - name: backup-storage
              mountPath: /backup
          restartPolicy: OnFailure
          volumes:
          - name: backup-storage
            persistentVolumeClaim:
              claimName: mysql-backup-pvc
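
If you want to preview what the cleanup would delete before it runs for real, you can run the same find expression with -print instead of -exec rm, from any pod or host that has the backup path mounted:

# List backups older than 7 days without deleting anything
find /backup -type f -mtime +7 -name '*.sql.gz' -print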

Persistent Volume Claim

I have an NFS Storage Class named nfs. We need to create a persistent volume claim and let both jobs use this PVC, since both jobs need access to the data on that storage path. Inside templates/mysql-pvc-for-jobs.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-backup-pvc
  namespace: databases
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 5Gi
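
Once applied, you can confirm the claim was bound by the NFS provisioner (the STATUS column should show Bound):

kubectl get pvc mysql-backup-pvc -n databases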

Deploy

Once we have everything defined, we can deploy it all using kubectl apply -f templates/. Just make sure you review the namespaces, storage classes, etc.

You can view the resources using:

kubectl get cronjobs -n databases
NAME                SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
db-backup-cleanup   0 3 * * 0   False     0        <none>          47h
mysql-db-backup     0 2 * * *   False     0        15h             2d
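
If you don't want to wait for the schedule, you can trigger a one-off run from the CronJob to verify everything works end to end; the job name manual-backup-test is just an example:

kubectl create job --from=cronjob/mysql-db-backup manual-backup-test -n databases
kubectl logs -n databases -l job-name=manual-backup-test -f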