Backup and Restore Runbook

What this does

.github/workflows/supabase-backup.yml runs a scheduled pg_dump of the production Supabase Postgres database every day at 02:00 UTC (07:30 IST — chosen for low ops traffic). It produces three files:

  • schema-<UTC-timestamp>.sql.gz — DDL only (supabase db dump --schema-only)
  • data-<UTC-timestamp>.sql.gz — row data only (supabase db dump --data-only)
  • checksums.txt — SHA-256 of both .gz files

These are uploaded as a single workflow artifact named supabase-backup-YYYY-MM-DD with 30-day retention in GitHub Actions.
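For reference, the date components in those names can be reproduced locally. A minimal sketch — the exact timestamp format inside the workflow is an assumption, so check supabase-backup.yml for the real one:

```shell
# Assumed naming scheme, for illustration only.
STAMP="$(date -u +%Y-%m-%dT%H-%M-%SZ)"   # UTC timestamp embedded in each dump name
DAY="$(date -u +%F)"                     # YYYY-MM-DD used in the artifact name
SCHEMA_FILE="schema-${STAMP}.sql.gz"
DATA_FILE="data-${STAMP}.sql.gz"
ARTIFACT="supabase-backup-${DAY}"
echo "$SCHEMA_FILE + $DATA_FILE -> ${ARTIFACT}.zip"
```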

If the optional AWS_S3_BUCKET secret is set, the same files are also copied to s3://$AWS_S3_BUCKET/supabase-backups/YYYY-MM-DD/. Long-term retention there is governed by your bucket lifecycle policy.

One-time setup: SUPABASE_DB_URL secret

The workflow requires a single secret: the direct Postgres connection string.

  1. Open the Supabase dashboard → Settings → Database
  2. Under Connection string, pick the Direct connection tab (NOT the session/transaction pooler — pg_dump needs a real Postgres connection, not PgBouncer)
  3. Copy the URI and paste your DB password in place of [YOUR-PASSWORD]
  4. In GitHub: Settings → Secrets and variables → Actions → New repository secret
  5. Name: SUPABASE_DB_URL
  6. Value: the full postgres://... URI (with embedded password)
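One gotcha with step 3: if the password contains URI-reserved characters (@, :, /, ?, #), it must be percent-encoded before being embedded, or the connection string will parse incorrectly. A quick way to encode it — the password shown is a made-up example:

```shell
# Percent-encode a password for safe embedding in a postgres:// URI.
# RAW_PW is a made-up example; substitute your real password.
RAW_PW='p@ss/word#1'
ENC_PW="$(printf %s "$RAW_PW" | python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.stdin.read(), safe=""))')"
echo "$ENC_PW"
```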

Optional S3 secrets, only if you want long-term archive: AWS_S3_BUCKET, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION.

Triggering a manual backup

UI: Actions → Supabase Backup → Run workflow → Run workflow

CLI:

gh workflow run supabase-backup.yml

Downloading a specific day's backup

  1. Actions tab → select the Supabase Backup workflow
  2. Click the run for the date you want
  3. Scroll to Artifacts → download supabase-backup-YYYY-MM-DD.zip
  4. Unzip — you'll have schema-*.sql.gz, data-*.sql.gz, checksums.txt

Verifying integrity

sha256sum -c checksums.txt

Both lines must report OK. If either fails, do not restore from that artifact — fetch a different day's backup or re-run the manual backup.
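If you script the restore, it's worth failing fast on a bad checksum. A small wrapper — the function name is ours, not part of the workflow:

```shell
# verify_backup DIR: run the checksum check inside DIR; non-zero exit on any mismatch.
verify_backup() {
    ( cd "$1" && sha256sum -c checksums.txt )
}
```

Call it as `verify_backup ./unzipped-artifact-dir` before any gunzip/psql step, and stop on failure.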

Restoring

Decompress first:

gunzip schema-*.sql.gz
gunzip data-*.sql.gz

Set the target DB URL (NEVER point this at production by accident — use a staging or local DB unless this is an actual disaster recovery):

export TARGET_DB_URL="postgres://..."
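A belt-and-braces guard before running psql: refuse any URL that contains the production host. The host string below is a placeholder assumption; substitute your real project's hostname.

```shell
# refuse_production URL: fail when URL points at the production host.
# db.PRODREF.supabase.co is a placeholder -- put your real production host here.
refuse_production() {
    case "$1" in
        *db.PRODREF.supabase.co*)
            echo "Refusing: $1 targets production" >&2
            return 1
            ;;
    esac
}
refuse_production "$TARGET_DB_URL" || exit 1
```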

Full restore (DR scenario)

psql "$TARGET_DB_URL" < schema-*.sql
psql "$TARGET_DB_URL" < data-*.sql

Schema-only restore (test a migration against last night's structure)

psql "$TARGET_DB_URL" < schema-*.sql

Data-only restore (re-seed a DB whose schema already matches)

psql "$TARGET_DB_URL" < data-*.sql

Retention rationale

  • 30 days in GitHub Actions covers the realistic incident window (most data-loss bugs are caught within a sprint).
  • For longer-term archival (regulatory, year-over-year audit), enable the optional S3 rotation and configure your bucket's lifecycle to transition to Glacier after 90 days.
  • Supabase's own PITR (point-in-time recovery) on the Pro plan is the primary recovery mechanism for very recent corruption; this workflow is the offsite, version-controlled backup of last resort.
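The 90-day Glacier transition mentioned above maps to a bucket lifecycle rule along these lines — rule ID and prefix are chosen to match this runbook's s3:// layout; adjust as needed:

```json
{
  "Rules": [
    {
      "ID": "archive-supabase-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "supabase-backups/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

Apply it with aws s3api put-bucket-lifecycle-configuration --bucket "$AWS_S3_BUCKET" --lifecycle-configuration file://lifecycle.json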

Security notes

  • GitHub Actions artifacts are scoped to repository collaborators with read access — treat the contents as production credentials.
  • data-*.sql contains every customer record, payment, and partner detail. Do not download to a personal machine; restore directly into a controlled environment.
  • SUPABASE_DB_URL is repo-scoped and should be rotated whenever a collaborator with admin access leaves the team.
  • The dump itself does not contain the database password — only the workflow run's environment does, and GitHub redacts it from logs.