
Managed Application Backup Configuration

Configure strategies and BackupClasses for logical data backups of managed databases (Postgres, MariaDB, ClickHouse, FoundationDB).

This guide is for cluster administrators who configure backup strategies for Cozystack-managed database applications: Postgres, MariaDB, ClickHouse, and FoundationDB. Once strategies and BackupClass resources are in place, tenants run backups and restores by creating BackupJob, Plan, and RestoreJob resources with no further admin action.

Prerequisites

  • Administrator access to the Cozystack (management) cluster.
  • The backup-controller and backupstrategy-controller components are installed and running.
  • S3-compatible storage reachable from the management cluster — either the in-cluster SeaweedFS provisioned via the Bucket application, or any external S3 endpoint.
  • The corresponding upstream operator is deployed for each application Kind you want to back up: CloudNativePG, mariadb-operator, ClickHouse operator, or fdb-kubernetes-operator. These ship with Cozystack by default.

How a managed-application strategy works

The flow on every BackupJob:

  1. A tenant creates a BackupJob (or a Plan that materialises one on a cron) that references a BackupClass and an apps.cozystack.io/<Kind> application.
  2. The core backup controller resolves the BackupClass and matches the application Kind to a driver-specific strategy.backups.cozystack.io/<Kind> strategy.
  3. The driver renders its strategy template against the live application object (.Application) and the BackupClass parameters (.Parameters), then triggers the operator-native backup: a Backup CR for MariaDB, a FoundationDBBackup CR for FoundationDB, an HTTP call against the in-pod sidecar for ClickHouse, or a barman-driven cnpg.io snapshot for Postgres.
  4. On success the driver creates a Cozystack Backup artefact in the same namespace; RestoreJob resources reference that artefact later.

BackupClass is cluster-scoped: a single instance covers every tenant namespace.
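
As a sketch of step 1, a tenant-side BackupJob might look like this. The apiVersion, kind, and the BackupClass/application references follow from this guide; the spec field names (backupClassName, applicationRef) are illustrative assumptions, so consult the BackupJob CRD for the exact schema:

```yaml
# Hypothetical tenant-side BackupJob referencing an admin-created
# BackupClass and an apps.cozystack.io application.
# spec field names are assumptions, not the authoritative schema.
apiVersion: backups.cozystack.io/v1alpha1
kind: BackupJob
metadata:
  name: mydb-manual-backup
  namespace: tenant-example          # the application namespace
spec:
  backupClassName: postgres-data-backup   # created by the admin (below)
  applicationRef:
    apiGroup: apps.cozystack.io
    kind: Postgres
    name: mydb
```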

Per-driver setup

The strategies below are written for the in-cluster SeaweedFS Bucket application. If you use external S3 storage, drop the endpointCA / TLS sections and point the endpoint at your provider.

Postgres (CNPG strategy)

The CNPG driver delegates to CloudNativePG’s native barman backup. Each BackupJob is a barman snapshot streamed to S3; RestoreJob recreates the cnpg.io/Cluster from the archive.

Create the strategy:

apiVersion: strategy.backups.cozystack.io/v1alpha1
kind: CNPG
metadata:
  name: postgres-data-cnpg-strategy
spec:
  template:
    serverName: "{{ .Application.metadata.name }}"
    barmanObjectStore:
      destinationPath: "s3://REPLACE_WITH_COSI_BUCKET_NAME/{{ .Application.metadata.name }}/"
      endpointURL: "https://REPLACE_WITH_S3_ENDPOINT"
      retentionPolicy: "30d"
      endpointCA:
        secretRef:
          name: "{{ .Application.metadata.name }}-cnpg-backup-ca"
        key: "ca.crt"
      s3Credentials:
        secretRef:
          name: "{{ .Application.metadata.name }}-cnpg-backup-creds"
      data:
        compression: gzip
      wal:
        compression: gzip

Bind the application Kind:

apiVersion: backups.cozystack.io/v1alpha1
kind: BackupClass
metadata:
  name: postgres-data-backup
spec:
  strategies:
    - application:
        apiGroup: apps.cozystack.io
        kind: Postgres
      strategyRef:
        apiGroup: strategy.backups.cozystack.io
        kind: CNPG
        name: postgres-data-cnpg-strategy

Per-application Secrets the tenant must provision in the application namespace:

Secret | Keys | Purpose
<app>-cnpg-backup-creds | ACCESS_KEY_ID, ACCESS_SECRET_KEY | S3 credentials consumed by barman
<app>-cnpg-backup-ca (only for self-signed endpoints) | ca.crt | CA bundle the barman client trusts

Drop the endpointCA block in the strategy when your S3 endpoint has a publicly-trusted certificate.
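
For example, the credentials Secret for an application named mydb could be created from a manifest like this (key names match the table above; the values and the tenant-example namespace are placeholders):

```yaml
# Placeholder values: substitute your S3 credentials and the
# application's actual namespace.
apiVersion: v1
kind: Secret
metadata:
  name: mydb-cnpg-backup-creds
  namespace: tenant-example
stringData:
  ACCESS_KEY_ID: "<s3-access-key>"
  ACCESS_SECRET_KEY: "<s3-secret-key>"
```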

MariaDB

The MariaDB driver delegates to mariadb-operator. Backups materialise as k8s.mariadb.com/v1alpha1 Backup CRs (logical mariadb-dump); restores materialise as Restore CRs that mariadb-import the dump back into the live database.

Create the strategy:

apiVersion: strategy.backups.cozystack.io/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-data-strategy
spec:
  template:
    storage:
      s3:
        bucket: "REPLACE_WITH_COSI_BUCKET_NAME"
        endpoint: "REPLACE_WITH_S3_ENDPOINT"
        prefix: "{{ .Application.metadata.name }}/"
        accessKeyIdSecretKeyRef:
          name: "{{ .Application.metadata.name }}-mariadb-backup-creds"
          key: "AWS_ACCESS_KEY_ID"
        secretAccessKeySecretKeyRef:
          name: "{{ .Application.metadata.name }}-mariadb-backup-creds"
          key: "AWS_SECRET_ACCESS_KEY"
        tls:
          enabled: true
          caSecretKeyRef:
            name: "{{ .Application.metadata.name }}-mariadb-backup-ca"
            key: "ca.crt"
    compression: gzip

The endpoint is path-style without scheme (e.g. seaweedfs-s3.<seaweedfs-namespace>.svc:8333 for the default in-cluster SeaweedFS — substitute the namespace where SeaweedFS is deployed in your environment). Drop the tls block entirely when the endpoint serves a publicly-trusted certificate.

Bind the application Kind:

apiVersion: backups.cozystack.io/v1alpha1
kind: BackupClass
metadata:
  name: mariadb-data-backup
spec:
  strategies:
    - application:
        apiGroup: apps.cozystack.io
        kind: MariaDB
      strategyRef:
        apiGroup: strategy.backups.cozystack.io
        kind: MariaDB
        name: mariadb-data-strategy

Per-application Secrets the tenant must provision in the application namespace:

Secret | Keys | Purpose
<app>-mariadb-backup-creds | AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY | S3 credentials consumed by mariadb-operator
<app>-mariadb-backup-ca (only for self-signed endpoints) | ca.crt | CA bundle for TLS verification
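
For example, the credentials Secret for an application named mydb could look like this (key names match the table above; the values and the tenant-example namespace are placeholders):

```yaml
# Placeholder values: substitute your S3 credentials and the
# application's actual namespace.
apiVersion: v1
kind: Secret
metadata:
  name: mydb-mariadb-backup-creds
  namespace: tenant-example
stringData:
  AWS_ACCESS_KEY_ID: "<s3-access-key>"
  AWS_SECRET_ACCESS_KEY: "<s3-secret-key>"
```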

ClickHouse (Altinity strategy)

The Altinity driver does not template a backup CR. It renders a small PodTemplateSpec that runs curl + jq against the in-pod clickhouse-backup HTTP API (port 7171) provided by a sidecar inside every chi-* Pod.

Create the strategy. The template is a PodTemplateSpec driving the sidecar; for the full reference template (with the shell script that POSTs create_remote / restore_remote and polls the action log) see examples/backups/clickhouse/01-create-strategy.sh in the cozystack repo.

apiVersion: strategy.backups.cozystack.io/v1alpha1
kind: Altinity
metadata:
  name: clickhouse-data-altinity-strategy
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: ch-backup-client
          image: alpine:3.19
          env:
            - name: API_USERNAME
              valueFrom:
                secretKeyRef:
                  name: clickhouse-{{ .Release.Name }}-backup-api-auth
                  key: username
            - name: API_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: clickhouse-{{ .Release.Name }}-backup-api-auth
                  key: password
          command: ["/bin/sh", "-c"]
          args:
            # See examples/backups/clickhouse/01-create-strategy.sh for the
            # full script: branches on .Mode (backup|restore) and either
            # POSTs /backup/create_remote or /backup/restore_remote/<name>,
            # then polls /backup/actions for terminal status.
            - |
              # ... (truncated; see linked example)

Bind the application Kind. No parameters are required — the strategy template addresses the sidecar by deterministic Pod DNS and reads S3 credentials from the chart-emitted <release>-backup-s3 Secret directly.

apiVersion: backups.cozystack.io/v1alpha1
kind: BackupClass
metadata:
  name: clickhouse-data-backup
spec:
  strategies:
    - application:
        apiGroup: apps.cozystack.io
        kind: ClickHouse
      strategyRef:
        apiGroup: strategy.backups.cozystack.io
        kind: Altinity
        name: clickhouse-data-altinity-strategy

FoundationDB

The FoundationDB driver delegates to the fdb-kubernetes-operator. Each BackupJob materialises a FoundationDBBackup CR (a continuous backup_agent Deployment that streams to a blob store); each RestoreJob materialises a FoundationDBRestore CR (a one-shot fdbrestore against the destination cluster).

Create the strategy:

apiVersion: strategy.backups.cozystack.io/v1alpha1
kind: FoundationDB
metadata:
  name: foundationdb-data-strategy
spec:
  template:
    blobStoreConfiguration:
      accountName: "{{ .Parameters.accountName }}"
      bucket: "{{ .Parameters.bucket }}"
      # BackupName left empty: driver fills with the BackupJob name so each
      # Cozystack BackupJob owns a discrete S3 directory.
      urlParameters:
        - 'secure_connection={{ .Parameters.secureConnection | default "0" }}'
        - 'region={{ .Parameters.region | default "us-east-1" }}'
    snapshotPeriodSeconds: 3600
    customParameters:
      - "--blob_credentials=/var/fdb-blob-credentials/blob_credentials.json"
    backupDeploymentPodTemplateSpec:
      spec:
        containers:
          - name: foundationdb
            volumeMounts:
              - name: blob-credentials
                mountPath: /var/fdb-blob-credentials
                readOnly: true
            resources:
              requests:
                cpu: 100m
                memory: 128Mi
              limits:
                cpu: 200m
                memory: 256Mi
            securityContext:
              runAsUser: 0
        volumes:
          - name: blob-credentials
            secret:
              secretName: "{{ .Application.metadata.name }}-fdb-backup-creds"
              items:
                - key: blob_credentials.json
                  path: blob_credentials.json

Bind the application Kind. Parameters carry blob-store routing; fill them once the tenant has provisioned a Bucket and you have its endpoint and bucket name:

apiVersion: backups.cozystack.io/v1alpha1
kind: BackupClass
metadata:
  name: foundationdb-data-backup
spec:
  strategies:
    - application:
        apiGroup: apps.cozystack.io
        kind: FoundationDB
      strategyRef:
        apiGroup: strategy.backups.cozystack.io
        kind: FoundationDB
        name: foundationdb-data-strategy
      parameters:
        accountName: "<api_key>@<endpoint-host>:<port>"
        bucket: "<bucket-name-from-BucketInfo>"
        region: "us-east-1"
        secureConnection: "0"   # "1" for https endpoints

Per-application Secrets the tenant must provision in the application namespace:

Secret | Keys | Purpose
<app>-fdb-backup-creds | blob_credentials.json | JSON in the fdb-operator’s expected shape (below)

blob_credentials.json must follow this exact shape so backup_agent can resolve accountName:

{
  "accounts": {
    "<api_key>@<endpoint-host>:<port>": {
      "api_key": "<access_key>",
      "secret":  "<secret_key>"
    }
  }
}
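
Wrapped in the Secret that the strategy mounts, this becomes (for an application named mydb; the values and the tenant-example namespace are placeholders):

```yaml
# Placeholder values: the key blob_credentials.json matches the
# secretName items in the strategy's volumes block above.
apiVersion: v1
kind: Secret
metadata:
  name: mydb-fdb-backup-creds
  namespace: tenant-example
stringData:
  blob_credentials.json: |
    {
      "accounts": {
        "<api_key>@<endpoint-host>:<port>": {
          "api_key": "<access_key>",
          "secret":  "<secret_key>"
        }
      }
    }
```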

Apply and verify

Apply the strategy and BackupClass manifests:

kubectl apply -f <strategy>.yaml
kubectl apply -f <backupclass>.yaml

List the resources:

kubectl get cnpgs.strategy.backups.cozystack.io
kubectl get mariadbs.strategy.backups.cozystack.io
kubectl get altinities.strategy.backups.cozystack.io
kubectl get foundationdbs.strategy.backups.cozystack.io
kubectl get backupclasses

Each strategy should report no error conditions; each BackupClass should list the strategy entries you defined.

Handing off to tenants

Tenants run backups and restores against the BackupClass names you created above using BackupJob, Plan, and RestoreJob resources. Walk them through the Application Backup and Recovery guide; they do not need admin permissions to operate against an existing BackupClass.
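
For scheduled backups, a tenant Plan that materialises a BackupJob on a cron might look roughly like this. The spec field names (schedule, backupClassName, applicationRef) are assumptions; the Application Backup and Recovery guide has the authoritative schema:

```yaml
# Hypothetical Plan sketch: apiVersion and kind come from this guide,
# the spec field names are assumed for illustration.
apiVersion: backups.cozystack.io/v1alpha1
kind: Plan
metadata:
  name: mydb-nightly
  namespace: tenant-example
spec:
  schedule: "0 3 * * *"            # cron: one BackupJob per tick
  backupClassName: postgres-data-backup
  applicationRef:
    apiGroup: apps.cozystack.io
    kind: Postgres
    name: mydb
```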