# Backup and restore
The full runbook with every detail lives at docs/BACKUP_AND_RESTORE.md. This page is a condensed operations view — enough to act in an incident, with pointers back for specifics.
## What runs automatically
`.github/workflows/supabase-backup.yml` runs daily at 02:00 UTC (07:30 IST — a low-traffic window for India ops). Each run produces:
- `schema-<UTC-timestamp>.sql.gz` — DDL only
- `data-<UTC-timestamp>.sql.gz` — row data only
- `checksums.txt` — SHA-256 of both `.gz` files
These are uploaded as a GitHub Actions artifact `supabase-backup-YYYY-MM-DD` with 30-day retention. If the optional `AWS_S3_BUCKET` secret is configured, the same files are copied to `s3://$AWS_S3_BUCKET/supabase-backups/YYYY-MM-DD/` (S3 lifecycle rules govern long-term retention).
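For orientation, the workflow's dump step is presumably equivalent to the sketch below (exact flags, timestamp format, and the S3 copy are assumptions; the workflow file itself is authoritative):

```shell
# Schema-only and data-only dumps, compressed, plus checksums
TS=$(date -u +%Y-%m-%dT%H-%M-%SZ)
pg_dump "$SUPABASE_DB_URL" --schema-only | gzip > "schema-$TS.sql.gz"
pg_dump "$SUPABASE_DB_URL" --data-only   | gzip > "data-$TS.sql.gz"
sha256sum "schema-$TS.sql.gz" "data-$TS.sql.gz" > checksums.txt

# Optional offsite copy, mirroring the S3 path described above
aws s3 cp . "s3://$AWS_S3_BUCKET/supabase-backups/$(date -u +%F)/" \
  --recursive --exclude '*' --include '*.gz' --include 'checksums.txt'
```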
## RPO / RTO
- RPO (Recovery Point Objective): up to 24 hours of data loss from these artifacts, since dumps are daily. For tighter recovery, rely on Supabase PITR (Point-in-Time Recovery) on the Pro plan — it is the primary recovery mechanism for recent corruption. This artifact workflow is the offsite, version-controlled backup of last resort.
- RTO (Recovery Time Objective): dominated by restore time. A `psql < data-*.sql` into a fresh Supabase project typically takes tens of minutes; a schema-only restore takes minutes.
> **Warning**
> Supabase PITR is separate from this workflow and lives in the Supabase dashboard under Database → Backups. PITR on the Pro plan retains 7 days of WAL. The two mechanisms are complementary: PITR for recent bugs, artifact backups for long-horizon disaster recovery.
## Prerequisites
A single GitHub Actions secret is required: `SUPABASE_DB_URL` — the direct-connection Postgres URI (not the pgbouncer pooler URL). Setup steps are in the parent doc.
Optional S3 archival secrets: `AWS_S3_BUCKET`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`.
## Triggering a manual backup
Before any risky migration or mass edit, kick off an on-demand backup.
UI: Actions → Supabase Backup → Run workflow → Run workflow
CLI:
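The GitHub CLI invocation was likely equivalent to the following (workflow file name taken from the path above):

```shell
gh workflow run supabase-backup.yml
```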
Watch:
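A plausible way to follow the run with the GitHub CLI (the `--jq` selector picks the most recent run; assumed, not from the source):

```shell
# Tail the most recent run of the backup workflow
gh run watch "$(gh run list --workflow=supabase-backup.yml \
  --limit 1 --json databaseId --jq '.[0].databaseId')"
```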
## Downloading a backup
- GitHub → Actions tab → select the Supabase Backup workflow.
- Click the run for the date you want.
- Scroll to Artifacts → download `supabase-backup-YYYY-MM-DD.zip`.
- Unzip — you will have `schema-*.sql.gz`, `data-*.sql.gz`, and `checksums.txt`.
## Verifying integrity
Always run this before restoring:
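The verification command was most likely a checksum comparison along these lines (run inside the unzipped artifact directory):

```shell
sha256sum -c checksums.txt
```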
Both lines must report OK. If either fails, do not restore from that artifact — fetch another day or trigger a manual backup.
## Restore walkthrough
> **Danger**
> Never restore into the production database unless this is an active disaster-recovery scenario authorised by the CEO/GM. Set `TARGET_DB_URL` carefully; a typo pointing at prod will overwrite live data.
1. Decompress:
2. Set the target DB URL (use staging or a scratch Supabase project unless this is real DR):
3. Pick a restore mode:
   - Full restore (DR scenario):
   - Schema-only (test a migration against last night's structure):
   - Data-only (re-seed a DB whose schema already matches):
4. Sanity checks after restore:
   - Row counts on critical tables (`bookings`, `payments`, `customers`, `partners`).
   - RLS policies are present (`SELECT polname FROM pg_policies WHERE schemaname='public';`).
   - A smoke-test login as a real user succeeds.
   - Trigger a Sentry release in the frontend pointing at the restored backend to verify end-to-end.
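The command sequence for the walkthrough was likely close to this sketch (file names, psql flags, and the `TARGET_DB_URL` placeholder are assumptions; docs/BACKUP_AND_RESTORE.md is authoritative):

```shell
# 1. Decompress (-k keeps the .gz originals for re-verification)
gunzip -k schema-*.sql.gz data-*.sql.gz

# 2. Target DB -- staging or a scratch project unless this is real DR
export TARGET_DB_URL='postgresql://postgres:<password>@db.<project-ref>.supabase.co:5432/postgres'

# 3. Restore mode -- a full DR restore runs schema first, then data;
#    schema-only or data-only runs just the matching file
psql "$TARGET_DB_URL" -v ON_ERROR_STOP=1 -f schema-*.sql
psql "$TARGET_DB_URL" -v ON_ERROR_STOP=1 -f data-*.sql

# 4. One of the sanity checks: confirm RLS policies survived
psql "$TARGET_DB_URL" -c "SELECT polname FROM pg_policies WHERE schemaname='public';"
```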
## Security notes
- GitHub Actions artifacts are scoped to repository collaborators with read access. Treat their contents as production credentials.
- `data-*.sql` contains every customer record, payment detail, and partner rate. Do not download it to a personal machine; restore directly into a controlled environment.
- Rotate `SUPABASE_DB_URL` whenever a collaborator with admin access leaves the team (see `security.md`).
- The dump itself does not contain the database password — only the workflow run's environment does, and GitHub redacts it from logs.
See docs/BACKUP_AND_RESTORE.md for the full source of truth (first-time secret setup, S3 rotation, retention rationale).