Backup and restore

The full runbook with every detail lives at docs/BACKUP_AND_RESTORE.md. This page is a condensed operations view — enough to act in an incident, with pointers back for specifics.

What runs automatically

.github/workflows/supabase-backup.yml runs daily at 02:00 UTC (07:30 IST — low-traffic for India ops). Each run produces:

  • schema-<UTC-timestamp>.sql.gz — DDL only
  • data-<UTC-timestamp>.sql.gz — row data only
  • checksums.txt — SHA-256 of both .gz files

These are uploaded as a GitHub Actions artifact supabase-backup-YYYY-MM-DD with 30-day retention. If the optional AWS_S3_BUCKET secret is configured, the same files are copied to s3://$AWS_S3_BUCKET/supabase-backups/YYYY-MM-DD/ (S3 lifecycle governs long-term retention).
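The per-run steps can be sketched roughly as follows. This is a sketch only; the authoritative steps live in supabase-backup.yml. The pg_dump invocations are commented out and stand-in files are used so the checksum step is demonstrable without a live database.

```shell
# Rough sketch of one backup run. The real workflow is
# .github/workflows/supabase-backup.yml; the pg_dump lines are
# commented out here and replaced with stand-in files so the
# checksum step can run without a database.
set -euo pipefail
STAMP="$(date -u +%Y%m%dT%H%M%SZ)"

# pg_dump --schema-only "$SUPABASE_DB_URL" | gzip > "schema-${STAMP}.sql.gz"  # DDL only
# pg_dump --data-only   "$SUPABASE_DB_URL" | gzip > "data-${STAMP}.sql.gz"   # row data only
printf 'stand-in schema dump\n' | gzip > "schema-${STAMP}.sql.gz"
printf 'stand-in data dump\n'   | gzip > "data-${STAMP}.sql.gz"

# SHA-256 of both .gz files, written alongside them
sha256sum "schema-${STAMP}.sql.gz" "data-${STAMP}.sql.gz" > checksums.txt
sha256sum -c checksums.txt   # both lines should report OK
```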

RPO / RTO

  • RPO (Recovery Point Objective): up to 24 hours of data loss from these artifacts, since dumps are daily. For tighter recovery, rely on Supabase PITR (Point-in-Time Recovery) on the Pro plan — it is the primary recovery mechanism for recent corruption. This artifact workflow is the offsite, version-controlled backup of last resort.
  • RTO (Recovery Time Objective): dominated by restore time. A psql < data-*.sql on a fresh Supabase project typically takes tens of minutes; schema-only restore is minutes.

Warning

Supabase PITR is separate from this workflow and lives in the Supabase dashboard under Database → Backups. PITR on Pro plans retains 7 days of WAL. Both mechanisms are complementary: PITR for recent bugs, artifact backups for long-horizon disaster recovery.

Prerequisites

A single GitHub Actions secret is required: SUPABASE_DB_URL — the direct-connection Postgres URI (not the pooled pgbouncer URL). Setup steps are in the parent doc.

Optional S3 archival secrets: AWS_S3_BUCKET, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION.

Triggering a manual backup

Before any risky migration or mass edit, kick off an on-demand backup.

UI: Actions → Supabase Backup → Run workflow → Run workflow

CLI:

gh workflow run supabase-backup.yml

Watch:

gh run watch

Downloading a backup

  1. GitHub → Actions tab → select the Supabase Backup workflow.
  2. Click the run for the date you want.
  3. Scroll to Artifacts → download supabase-backup-YYYY-MM-DD.zip.
  4. Unzip — you will have schema-*.sql.gz, data-*.sql.gz, checksums.txt.
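The same artifact can also be fetched from the CLI. A hedged sketch: it only derives the artifact name; the gh call (which assumes an authenticated gh CLI with access to this repo) is left commented out.

```shell
# Derive today's artifact name (supabase-backup-YYYY-MM-DD) and
# download it with gh. The gh call assumes an authenticated CLI
# and is commented out here.
DATE="$(date -u +%F)"
ARTIFACT="supabase-backup-${DATE}"
echo "$ARTIFACT"
# gh run download --name "$ARTIFACT" --dir "./restore-${DATE}"
```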

Verifying integrity

Always run this before restoring:

sha256sum -c checksums.txt

Both lines must report OK. If either fails, do not restore from that artifact — fetch another day's artifact or trigger a fresh manual backup.

Restore walkthrough

Danger

Never restore into the production database unless this is an active disaster-recovery scenario authorised by CEO/GM. Set TARGET_DB_URL carefully; a typo pointing at prod will overwrite live data.
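One way to enforce this warning is a small guard before any psql command. A sketch only: the "prod" substring patterns and the example scratch hostname are assumptions; adapt the check to your actual production hostname.

```shell
# Refuse to proceed if TARGET_DB_URL looks like production.
# The substring patterns and the fallback example URL below are
# assumptions; replace them with your real hosts before relying
# on this guard.
TARGET_DB_URL="${TARGET_DB_URL:-postgres://postgres@db.example-scratch.supabase.co:5432/postgres}"
case "$TARGET_DB_URL" in
  *prod*|*production*)
    echo "refusing: TARGET_DB_URL looks like production" >&2
    exit 1
    ;;
  *)
    echo "target accepted: restoring to non-production URL"
    ;;
esac
# psql "$TARGET_DB_URL" < schema-*.sql   # only reached when the guard passes
```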

  1. Decompress:

    gunzip schema-*.sql.gz
    gunzip data-*.sql.gz
    
  2. Set the target DB URL (use staging or a scratch Supabase project unless this is real DR):

    export TARGET_DB_URL="postgres://..."
    
  3. Pick a restore mode:

    Full restore (DR scenario):

    psql "$TARGET_DB_URL" < schema-*.sql
    psql "$TARGET_DB_URL" < data-*.sql
    

    Schema-only (test a migration against last night's structure):

    psql "$TARGET_DB_URL" < schema-*.sql
    

    Data-only (re-seed a DB whose schema already matches):

    psql "$TARGET_DB_URL" < data-*.sql
    
  4. Sanity checks after restore:

    • Row counts on critical tables (bookings, payments, customers, partners).
    • RLS policies are present (SELECT polname FROM pg_policies WHERE schemaname='public';).
    • A smoke-test login as a real user succeeds.
    • Point a frontend build (tagged as a Sentry release) at the restored backend and verify end-to-end.
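The first two checks above can be scripted. A sketch: the table names are the critical ones listed above, and the psql call is commented out so nothing here touches a live database.

```shell
# Post-restore sanity queries. psql is commented out; the snippet
# only assembles the SQL. Table names match the critical tables
# listed above.
CHECKS="$(cat <<'SQL'
SELECT 'bookings'  AS tbl, count(*) FROM bookings;
SELECT 'payments'  AS tbl, count(*) FROM payments;
SELECT polname FROM pg_policies WHERE schemaname = 'public';
SQL
)"
echo "$CHECKS"
# psql "$TARGET_DB_URL" -c "$CHECKS"   # run against the restored database
```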

Security notes

  • GitHub Actions artifacts are scoped to repository collaborators with read access. Treat their contents as production credentials.
  • data-*.sql contains every customer record, payment detail, and partner rate. Do not download it to a personal machine; restore directly into a controlled environment.
  • Rotate SUPABASE_DB_URL whenever a collaborator with admin access leaves the team (see security.md).
  • The dump itself does not contain the database password — only the workflow run's environment does, and GitHub redacts it from logs.

See docs/BACKUP_AND_RESTORE.md for the full source of truth (first-time secret setup, S3 rotation, retention rationale).