# Managed Application Backup Configuration
This guide is for cluster administrators who configure backup strategies for Cozystack-managed database applications: Postgres, MariaDB, ClickHouse, and FoundationDB. Once strategies and `BackupClass` resources are in place, tenants run backups and restores by creating `BackupJob`, `Plan`, and `RestoreJob` resources with no further admin action.
This page covers data-only backups driven by each operator's native backup mechanism (CloudNativePG Barman, mariadb-operator dumps, Altinity clickhouse-backup, FoundationDB `backup_agent`). The `apps.cozystack.io/*` CR, its HelmRelease, chart values, and operator-managed Secrets are not captured by these strategies.
For backups that bundle Helm release + CRs + PVC snapshots (used by VMInstance / VMDisk), see Velero Backup Configuration.
## Prerequisites
- Administrator access to the Cozystack (management) cluster.
- The `backup-controller` and `backupstrategy-controller` components are installed and running.
- S3-compatible storage reachable from the management cluster: either the in-cluster SeaweedFS provisioned via the `Bucket` application, or any external S3 endpoint.
- The corresponding upstream operator is deployed for each application Kind you want to back up: CloudNativePG, mariadb-operator, ClickHouse operator, or fdb-kubernetes-operator. These ship with Cozystack by default.
## How a managed-application strategy works
The flow on every `BackupJob`:

1. A tenant creates a `BackupJob` (or a `Plan` that materialises one on a cron) that references a `BackupClass` and an `apps.cozystack.io/<Kind>` application.
2. The core backup controller resolves the `BackupClass` and matches the application Kind to a driver-specific `strategy.backups.cozystack.io/<Kind>` strategy.
3. The driver renders its strategy template against the live application object (`.Application`) and the BackupClass parameters (`.Parameters`), then creates the operator-native backup CR (`Backup` for MariaDB, `FoundationDBBackup` for FoundationDB, an HTTP call against the in-pod sidecar for ClickHouse, a barman-driven snapshot in `cnpg.io` for Postgres).
4. On success the driver creates a Cozystack `Backup` artefact in the same namespace; `RestoreJob` resources reference that artefact later.
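For orientation, the tenant-facing side of this flow can be sketched as a manifest. This is a hedged illustration: the field names below (`backupClassName`, `applicationRef`) are assumptions, not a schema reference — see the Application Backup and Recovery guide for the authoritative shape.

```yaml
# Hypothetical tenant manifest; field names are illustrative assumptions.
apiVersion: backups.cozystack.io/v1alpha1
kind: BackupJob
metadata:
  name: mydb-manual-backup
  namespace: tenant-foo          # the tenant's application namespace
spec:
  backupClassName: postgres-data-backup   # a cluster-scoped BackupClass
  applicationRef:
    apiGroup: apps.cozystack.io
    kind: Postgres
    name: mydb                   # the apps.cozystack.io application to back up
```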
`BackupClass` is cluster-scoped: a single instance covers every tenant namespace.
## Per-driver setup
The strategies below are written for the in-cluster SeaweedFS `Bucket` application. If you use external S3 storage, drop the `endpointCA` / TLS sections and point the endpoint at your provider.
### Postgres (CNPG strategy)
The CNPG driver delegates to CloudNativePG's native Barman backup. Each `BackupJob` is a Barman snapshot streamed to S3; `RestoreJob` recreates the `cnpg.io/Cluster` from the archive.
Create the strategy:
```yaml
apiVersion: strategy.backups.cozystack.io/v1alpha1
kind: CNPG
metadata:
  name: postgres-data-cnpg-strategy
spec:
  template:
    serverName: "{{ .Application.metadata.name }}"
    barmanObjectStore:
      destinationPath: "s3://REPLACE_WITH_COSI_BUCKET_NAME/{{ .Application.metadata.name }}/"
      endpointURL: "https://REPLACE_WITH_S3_ENDPOINT"
      retentionPolicy: "30d"
      endpointCA:
        secretRef:
          name: "{{ .Application.metadata.name }}-cnpg-backup-ca"
          key: "ca.crt"
      s3Credentials:
        secretRef:
          name: "{{ .Application.metadata.name }}-cnpg-backup-creds"
      data:
        compression: gzip
      wal:
        compression: gzip
```
Bind the application Kind:
```yaml
apiVersion: backups.cozystack.io/v1alpha1
kind: BackupClass
metadata:
  name: postgres-data-backup
spec:
  strategies:
    - application:
        apiGroup: apps.cozystack.io
        kind: Postgres
      strategyRef:
        apiGroup: strategy.backups.cozystack.io
        kind: CNPG
        name: postgres-data-cnpg-strategy
```
Per-application Secrets the tenant must provision in the application namespace:
| Secret | Keys | Purpose |
|---|---|---|
| `<app>-cnpg-backup-creds` | `ACCESS_KEY_ID`, `ACCESS_SECRET_KEY` | S3 credentials consumed by barman |
| `<app>-cnpg-backup-ca` (only for self-signed endpoints) | `ca.crt` | CA bundle the barman client trusts |
Drop the `endpointCA` block in the strategy when your S3 endpoint has a publicly trusted certificate.
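Assuming the secret names from the table above, a tenant can provision them with plain `kubectl`; the namespace, key values, and CA path below are placeholders:

```bash
# S3 credentials consumed by barman
kubectl -n <tenant-namespace> create secret generic <app>-cnpg-backup-creds \
  --from-literal=ACCESS_KEY_ID=<access_key> \
  --from-literal=ACCESS_SECRET_KEY=<secret_key>

# CA bundle, only needed when the S3 endpoint uses a self-signed certificate
kubectl -n <tenant-namespace> create secret generic <app>-cnpg-backup-ca \
  --from-file=ca.crt=<path-to-ca.crt>
```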
### MariaDB
The MariaDB driver delegates to `mariadb-operator`. Backups materialise as `k8s.mariadb.com/v1alpha1` `Backup` CRs (a logical `mariadb-dump`); restores materialise as `Restore` CRs that `mariadb-import` the dump back into the live database.
Create the strategy:
```yaml
apiVersion: strategy.backups.cozystack.io/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-data-strategy
spec:
  template:
    storage:
      s3:
        bucket: "REPLACE_WITH_COSI_BUCKET_NAME"
        endpoint: "REPLACE_WITH_S3_ENDPOINT"
        prefix: "{{ .Application.metadata.name }}/"
        accessKeyIdSecretKeyRef:
          name: "{{ .Application.metadata.name }}-mariadb-backup-creds"
          key: "AWS_ACCESS_KEY_ID"
        secretAccessKeySecretKeyRef:
          name: "{{ .Application.metadata.name }}-mariadb-backup-creds"
          key: "AWS_SECRET_ACCESS_KEY"
        tls:
          enabled: true
          caSecretKeyRef:
            name: "{{ .Application.metadata.name }}-mariadb-backup-ca"
            key: "ca.crt"
    compression: gzip
```
The endpoint is path-style without a scheme (e.g. `seaweedfs-s3.<seaweedfs-namespace>.svc:8333` for the default in-cluster SeaweedFS; substitute the namespace where SeaweedFS is deployed in your environment). Drop the `tls` block entirely when the endpoint serves a publicly trusted certificate.
Bind the application Kind:
```yaml
apiVersion: backups.cozystack.io/v1alpha1
kind: BackupClass
metadata:
  name: mariadb-data-backup
spec:
  strategies:
    - application:
        apiGroup: apps.cozystack.io
        kind: MariaDB
      strategyRef:
        apiGroup: strategy.backups.cozystack.io
        kind: MariaDB
        name: mariadb-data-strategy
```
Per-application Secrets the tenant must provision in the application namespace:
| Secret | Keys | Purpose |
|---|---|---|
| `<app>-mariadb-backup-creds` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` | S3 credentials consumed by mariadb-operator |
| `<app>-mariadb-backup-ca` (only for self-signed endpoints) | `ca.crt` | CA bundle for TLS verification |
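As with Postgres, the secrets from the table above can be provisioned with plain `kubectl`; the namespace, key values, and CA path below are placeholders:

```bash
# S3 credentials consumed by mariadb-operator
kubectl -n <tenant-namespace> create secret generic <app>-mariadb-backup-creds \
  --from-literal=AWS_ACCESS_KEY_ID=<access_key> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret_key>

# CA bundle, only needed when the S3 endpoint uses a self-signed certificate
kubectl -n <tenant-namespace> create secret generic <app>-mariadb-backup-ca \
  --from-file=ca.crt=<path-to-ca.crt>
```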
Note: the `backup.*` block in `apps.cozystack.io/MariaDB` (the legacy `mariadb-dump` + restic path) is deprecated in favour of this BackupClass flow. Existing tenants with `backup.enabled=true` continue to render the legacy resources unchanged.

### ClickHouse (Altinity strategy)
The Altinity driver does not template a backup CR. Instead it renders a small `PodTemplateSpec` that runs `curl` and `jq` against the in-pod `clickhouse-backup` HTTP API (port 7171) provided by a sidecar inside every `chi-*` Pod.

Note: set `backup.enabled=true` on every ClickHouse application instance. That flag is what materialises the in-pod sidecar and the `clickhouse-<release>-backup-api-auth` Secret the strategy authenticates with. Unlike MariaDB and FoundationDB, ClickHouse's chart-level `backup.*` block is not deprecated; the BackupClass flow piggybacks on the same sidecar.

Create the strategy. The template is a `PodTemplateSpec` driving the sidecar; for the full reference template (with the shell script that POSTs `create_remote` / `restore_remote` and polls the action log), see `examples/backups/clickhouse/01-create-strategy.sh` in the cozystack repo.
```yaml
apiVersion: strategy.backups.cozystack.io/v1alpha1
kind: Altinity
metadata:
  name: clickhouse-data-altinity-strategy
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: ch-backup-client
          image: alpine:3.19
          env:
            - name: API_USERNAME
              valueFrom:
                secretKeyRef:
                  name: clickhouse-{{ .Release.Name }}-backup-api-auth
                  key: username
            - name: API_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: clickhouse-{{ .Release.Name }}-backup-api-auth
                  key: password
          command: ["/bin/sh", "-c"]
          args:
            # See examples/backups/clickhouse/01-create-strategy.sh for the
            # full script: branches on .Mode (backup|restore) and either
            # POSTs /backup/create_remote or /backup/restore_remote/<name>,
            # then polls /backup/actions for terminal status.
            - |
              # ... (truncated; see linked example)
```
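For orientation only, a minimal sketch of what such a client script can look like, assuming the `clickhouse-backup` API endpoints named above. This is a hedged illustration, not the reference script from the repo; the sidecar address, `BACKUP_NAME` variable, and response handling are assumptions:

```bash
# Hypothetical sketch; see the repo example for the real script.
API="http://localhost:7171"          # the real script resolves the chi-* Pod DNS name
AUTH="$API_USERNAME:$API_PASSWORD"   # injected from the backup-api-auth Secret

# Kick off a remote backup: create a local backup and upload it to S3.
curl -fsS -u "$AUTH" -X POST "$API/backup/create_remote"

# Poll the action log until the most recent command reaches a terminal state.
until curl -fsS -u "$AUTH" "$API/backup/actions" | tail -n 1 | grep -q '"status":"success"'; do
  sleep 5
done
```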
Bind the application Kind. No parameters are required: the strategy template addresses the sidecar by deterministic Pod DNS and reads S3 credentials directly from the chart-emitted `<release>-backup-s3` Secret.
```yaml
apiVersion: backups.cozystack.io/v1alpha1
kind: BackupClass
metadata:
  name: clickhouse-data-backup
spec:
  strategies:
    - application:
        apiGroup: apps.cozystack.io
        kind: ClickHouse
      strategyRef:
        apiGroup: strategy.backups.cozystack.io
        kind: Altinity
        name: clickhouse-data-altinity-strategy
```
### FoundationDB
The FoundationDB driver delegates to the `fdb-kubernetes-operator`. Each `BackupJob` materialises a `FoundationDBBackup` CR (a continuous `backup_agent` Deployment that streams to a blob store); each `RestoreJob` materialises a `FoundationDBRestore` CR (a one-shot `fdbrestore` against the destination cluster).
Note: the driver tears down any existing `FoundationDBBackup` on the same FoundationDB before starting a new one, and each Cozystack `BackupJob` owns a discrete blob-store directory keyed by its name.

Create the strategy:
```yaml
apiVersion: strategy.backups.cozystack.io/v1alpha1
kind: FoundationDB
metadata:
  name: foundationdb-data-strategy
spec:
  template:
    blobStoreConfiguration:
      accountName: "{{ .Parameters.accountName }}"
      bucket: "{{ .Parameters.bucket }}"
      # BackupName left empty: driver fills with the BackupJob name so each
      # Cozystack BackupJob owns a discrete S3 directory.
      urlParameters:
        - 'secure_connection={{ .Parameters.secureConnection | default "0" }}'
        - 'region={{ .Parameters.region | default "us-east-1" }}'
    snapshotPeriodSeconds: 3600
    customParameters:
      - "--blob_credentials=/var/fdb-blob-credentials/blob_credentials.json"
    backupDeploymentPodTemplateSpec:
      spec:
        containers:
          - name: foundationdb
            volumeMounts:
              - name: blob-credentials
                mountPath: /var/fdb-blob-credentials
                readOnly: true
            resources:
              requests:
                cpu: 100m
                memory: 128Mi
              limits:
                cpu: 200m
                memory: 256Mi
            securityContext:
              runAsUser: 0
        volumes:
          - name: blob-credentials
            secret:
              secretName: "{{ .Application.metadata.name }}-fdb-backup-creds"
              items:
                - key: blob_credentials.json
                  path: blob_credentials.json
```
Bind the application Kind. Parameters carry blob-store routing; fill them in once the tenant has provisioned a `Bucket` and you know its endpoint and bucket name:
```yaml
apiVersion: backups.cozystack.io/v1alpha1
kind: BackupClass
metadata:
  name: foundationdb-data-backup
spec:
  strategies:
    - application:
        apiGroup: apps.cozystack.io
        kind: FoundationDB
      strategyRef:
        apiGroup: strategy.backups.cozystack.io
        kind: FoundationDB
        name: foundationdb-data-strategy
      parameters:
        accountName: "<api_key>@<endpoint-host>:<port>"
        bucket: "<bucket-name-from-BucketInfo>"
        region: "us-east-1"
        secureConnection: "0" # "1" for https endpoints
```
Per-application Secrets the tenant must provision in the application namespace:
| Secret | Keys | Purpose |
|---|---|---|
| `<app>-fdb-backup-creds` | `blob_credentials.json` | JSON in the fdb-operator's expected shape (below) |
`blob_credentials.json` must follow this exact shape so `backup_agent` can resolve `accountName`:
```json
{
  "accounts": {
    "<api_key>@<endpoint-host>:<port>": {
      "api_key": "<access_key>",
      "secret": "<secret_key>"
    }
  }
}
```
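To avoid hand-editing the JSON, the Secret can be generated with `jq` and `kubectl`. This is a sketch: the endpoint, keys, and namespace are placeholders, and it assumes the account name is the access key joined to the endpoint with `@`, as shown above:

```bash
ACCESS_KEY="<access_key>"            # placeholder
SECRET_KEY="<secret_key>"            # placeholder
ENDPOINT="<endpoint-host>:<port>"    # e.g. the in-cluster SeaweedFS S3 service

# Render the accounts document in the shape backup_agent expects.
jq -n --arg acct "${ACCESS_KEY}@${ENDPOINT}" \
      --arg key "$ACCESS_KEY" --arg sec "$SECRET_KEY" \
      '{accounts: {($acct): {api_key: $key, secret: $sec}}}' \
  > blob_credentials.json

kubectl -n <tenant-namespace> create secret generic <app>-fdb-backup-creds \
  --from-file=blob_credentials.json
```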
Note: the `backup.*` block in `apps.cozystack.io/FoundationDB` is deprecated in favour of this BackupClass flow. Existing tenants with `backup.enabled=true` continue to render the legacy `FoundationDBBackup` CR unchanged.

## Apply and verify
Apply the strategy and BackupClass manifests:
```bash
kubectl apply -f <strategy>.yaml
kubectl apply -f <backupclass>.yaml
```
List the resources:
```bash
kubectl get cnpgs.strategy.backups.cozystack.io
kubectl get mariadbs.strategy.backups.cozystack.io
kubectl get altinities.strategy.backups.cozystack.io
kubectl get foundationdbs.strategy.backups.cozystack.io
kubectl get backupclasses
```
Each strategy should report no error conditions; each BackupClass should list the strategy entries you defined.
## Handing off to tenants
Tenants run backups and restores against the BackupClass names you created above using `BackupJob`, `Plan`, and `RestoreJob` resources. Walk them through the Application Backup and Recovery guide; they do not need admin permissions to operate against an existing `BackupClass`.
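A tenant-side restore can be sketched the same way as the `BackupJob` above. The field names here (`backupRef`) are illustrative assumptions; the authoritative schema is in the Application Backup and Recovery guide:

```yaml
# Hypothetical tenant manifest; field names are illustrative assumptions.
apiVersion: backups.cozystack.io/v1alpha1
kind: RestoreJob
metadata:
  name: mydb-restore
  namespace: tenant-foo
spec:
  backupRef:
    name: <backup-artefact-name>   # the Backup artefact created on success
```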