# Backup and Restore Runbook
## What this does
`.github/workflows/supabase-backup.yml` runs a scheduled `pg_dump` of the
production Supabase Postgres database every day at 02:00 UTC
(07:30 IST — chosen for low ops traffic). It produces three files:
- `schema-<UTC-timestamp>.sql.gz` — DDL only (`supabase db dump --schema-only`)
- `data-<UTC-timestamp>.sql.gz` — row data only (`supabase db dump --data-only`)
- `checksums.txt` — SHA-256 of both `.gz` files
These are uploaded as a single workflow artifact named
`supabase-backup-YYYY-MM-DD` with 30-day retention in GitHub Actions.
If the optional `AWS_S3_BUCKET` secret is set, the same files are also
copied to `s3://$AWS_S3_BUCKET/supabase-backups/YYYY-MM-DD/`. Long-term
retention there is governed by your bucket lifecycle policy.
## One-time setup: `SUPABASE_DB_URL` secret
The workflow requires a single secret: the direct Postgres connection string.
- Open the Supabase dashboard → Settings → Database
- Under Connection string, pick the Direct connection tab
  (NOT the session/transaction pooler — `pg_dump` needs a real Postgres
  connection, not pgbouncer)
- Copy the URI and paste your DB password in place of `[YOUR-PASSWORD]`
- In GitHub: Settings → Secrets and variables → Actions → New repository secret
  - Name: `SUPABASE_DB_URL`
  - Value: the full `postgres://...` URI (with embedded password)
Optional S3 secrets, only if you want long-term archive:
`AWS_S3_BUCKET`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`.
## Triggering a manual backup
UI: Actions → Supabase Backup → Run workflow → Run workflow
CLI:
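A sketch using the GitHub CLI, assuming the workflow file name from the top of this runbook (`supabase-backup.yml`):

```shell
# Queue a manual run of the backup workflow on the default branch
gh workflow run supabase-backup.yml

# Confirm it was queued (most recent run first)
gh run list --workflow=supabase-backup.yml --limit 1
```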
## Downloading a specific day's backup
- Actions tab → select the Supabase Backup workflow
- Click the run for the date you want
- Scroll to Artifacts → download `supabase-backup-YYYY-MM-DD.zip`
- Unzip — you'll have `schema-*.sql.gz`, `data-*.sql.gz`, `checksums.txt`
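The same steps can be done with the GitHub CLI; `<run-id>` here is a placeholder you'd take from the list output:

```shell
# List recent runs of the backup workflow to find the run ID for the date you want
gh run list --workflow=supabase-backup.yml

# Download that run's artifact into the current directory
gh run download <run-id> -n supabase-backup-YYYY-MM-DD
```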
## Verifying integrity
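Assuming `checksums.txt` is in standard `sha256sum` format (one line per `.gz` file, as the workflow's checksum step implies), verify from inside the unzipped artifact directory:

```shell
# Recompute hashes of the downloaded files and compare against checksums.txt
sha256sum -c checksums.txt
```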
Both lines must report OK. If either fails, do not restore from that
artifact — fetch a different day or re-run the manual backup.
## Restoring
Decompress first:
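For example, from the unzipped artifact directory:

```shell
# Expand both dumps in place; gunzip drops the .gz suffix
gunzip schema-*.sql.gz data-*.sql.gz
```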
Set the target DB URL (NEVER point this at production by accident — use a staging or local DB unless this is an actual disaster recovery):
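One way to do this, with placeholder host and password values you'd substitute for your own staging or recovery instance:

```shell
# Placeholder values — point this at staging/recovery, never production
export DB_URL="postgres://postgres:<password>@<staging-host>:5432/postgres"
```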
### Full restore (DR scenario)
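A minimal sketch, assuming the dumps are plain SQL (as `supabase db dump` produces) and `DB_URL` points at an empty target database:

```shell
# Structure first, then data; ON_ERROR_STOP aborts on the first failure
psql "$DB_URL" -v ON_ERROR_STOP=1 -f schema-<UTC-timestamp>.sql
psql "$DB_URL" -v ON_ERROR_STOP=1 -f data-<UTC-timestamp>.sql
```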
### Schema-only restore (test a migration against last night's structure)
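Apply only the DDL dump, e.g.:

```shell
# DDL only — no rows are touched
psql "$DB_URL" -v ON_ERROR_STOP=1 -f schema-<UTC-timestamp>.sql
```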
### Data-only restore (re-seed a DB whose schema already matches)
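Apply only the data dump; `--single-transaction` rolls everything back if any row fails to load, leaving the target unchanged:

```shell
# Rows only — schema must already match the dump
psql "$DB_URL" -v ON_ERROR_STOP=1 --single-transaction -f data-<UTC-timestamp>.sql
```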
## Retention rationale
- 30 days in GitHub Actions covers the realistic incident window (most data-loss bugs are caught within a sprint).
- For longer-term archival (regulatory, year-over-year audit), enable the optional S3 rotation and configure your bucket's lifecycle to transition to Glacier after 90 days.
- Supabase's own PITR (point-in-time recovery) on the Pro plan is the primary recovery mechanism for very recent corruption; this workflow is the offsite, version-controlled backup of last resort.
## Security notes
- GitHub Actions artifacts are scoped to repository collaborators with read access — treat the contents as production credentials.
- `data-*.sql` contains every customer record, payment, and partner detail. Do not download to a personal machine; restore directly into a controlled environment.
- `SUPABASE_DB_URL` is repo-scoped and should be rotated whenever a collaborator with admin access leaves the team.
- The dump itself does not contain the database password — only the workflow run's environment does, and GitHub redacts it from logs.