# Database migrations

Every schema and seed change to the Supabase Postgres database lives as a SQL file in `supabase/migrations/`. This runbook covers how to author, apply, push, and undo migrations.
## Conventions
From CLAUDE.md:
- Filename: `YYYYMMDDHHMMSS_description.sql` (UTC timestamp prefix, snake_case description). The timestamp determines apply order.
- Idempotent — every statement must be safe to run more than once: `CREATE TABLE IF NOT EXISTS`, `CREATE INDEX IF NOT EXISTS`, `ALTER TABLE ... ADD COLUMN IF NOT EXISTS`, `CREATE OR REPLACE FUNCTION`, `CREATE OR REPLACE VIEW`, `INSERT ... ON CONFLICT DO NOTHING` or `INSERT ... WHERE NOT EXISTS (...)`.
- Drops use `DROP ... IF EXISTS`.
- RLS policies on any table gated by permissions must call `auth_user_has_permission('x.y')`. Never hardcode role names (`CEO`, `GM`, etc.) in RLS — that bypasses the permission matrix and silently breaks when roles are reshuffled.
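A hypothetical migration that follows every convention above. All table, column, and seed names here are illustrative, not taken from the real schema:

```sql
-- 20260101120000_add_airline_capacity.sql (illustrative names only)
CREATE TABLE IF NOT EXISTS airlines (
  id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  name text NOT NULL UNIQUE
);

-- Safe to re-run: the column is only added if missing
ALTER TABLE airlines ADD COLUMN IF NOT EXISTS capacity integer;

CREATE INDEX IF NOT EXISTS airlines_name_idx ON airlines (name);

-- Seed data is guarded so a retry cannot double-insert
INSERT INTO airlines (name)
VALUES ('Example Air')
ON CONFLICT (name) DO NOTHING;

DROP VIEW IF EXISTS airlines_legacy_view;
```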
> **Danger**
> Once a migration is merged to `main` and applied to production, it is immutable. Do not edit the file, re-timestamp it, or delete it. If the change is wrong, write a new forward migration that fixes it.
## Creating a new migration
### Option A — Supabase CLI
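Assuming the standard Supabase CLI, the command looks like (the migration name is illustrative):

```shell
supabase migration new add_airline_capacity_column
```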
This stamps the current UTC timestamp and creates an empty file at `supabase/migrations/YYYYMMDDHHMMSS_add_airline_capacity_column.sql`.
### Option B — hand-written
Copy an existing file as a template, rename with today's UTC timestamp. Useful when you want to match formatting conventions in nearby migrations.
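A sketch of the copy-and-rename, assuming a POSIX shell; the source filename is illustrative:

```shell
# Stamp the new file with the current UTC timestamp so it sorts last
cp supabase/migrations/20260416020000_remove_permission_bypass.sql \
   supabase/migrations/"$(date -u +%Y%m%d%H%M%S)"_add_airline_capacity_column.sql
```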
Fill in the SQL. Prefer small, focused migrations (one feature per file) over omnibus scripts.
## Applying locally
Two modes, depending on how wrecked your local DB is:
- **Full reset** (fastest when the local DB is out of sync): drops the local Postgres, replays every migration from scratch, then runs seeds. Safe on `localhost`, catastrophic on production — the CLI guards this by reading `supabase/config.toml`, but double-check your shell is not pointed at a linked remote.
- **Incremental apply** (preserves local data): applies only migrations newer than what the local DB has already run.
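The corresponding commands, assuming the standard Supabase CLI:

```shell
# Full reset: drop the local DB, replay every migration, run seeds
supabase db reset

# Incremental apply: run only migrations the local DB has not yet recorded
supabase migration up
```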
> **Warning**
> If you write a non-idempotent migration, `db reset` will not catch it (clean DB → every statement succeeds), but `migration up` on a partially-applied DB will fail with `relation already exists` or `duplicate key`. Always exercise both paths before merging.
## Pushing to remote

The project is linked to Supabase project `yzpfwdxpwalmfuodkxni` (see `supabase/config.toml`).
- Confirm the link.
- Preview what will be applied.
- Push.

For bulk/first-time catch-up (many migrations behind): `--include-all` ignores the `schema_migrations` bookkeeping table; `--yes` skips the interactive confirm. Use this only when you know every pending migration is safe.
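Assuming the standard Supabase CLI, the three steps above might look like:

```shell
# 1. Confirm which project this directory is linked to
supabase projects list

# 2. Preview the pending migrations without running anything
supabase db push --dry-run

# 3. Push pending migrations to the linked remote
supabase db push

# Bulk/first-time catch-up only (see the caveat above):
supabase db push --include-all --yes
```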
> **Danger**
> `supabase db push` runs every pending migration against production. There is no dry-run that actually proves idempotence — if one statement fails halfway, later statements in the same file may or may not have run. Always:
>
> - Apply the migration to your local Supabase (`supabase db reset`) first.
> - Run the test suite against it (`npm test`).
> - Then push to remote during a low-traffic window.
## Seeding the permission matrix
When you add a new permission (see CLAUDE.md → Permissions), the same PR must include a migration that:
- Inserts the permission row into the `permissions` table.
- Grants it to the appropriate roles.
Pattern (adapted from existing seed migrations):
```sql
-- Seed the permission
INSERT INTO permissions (name, description, category)
VALUES ('finance.journals.approve', 'Approve manual journal entries', 'finance')
ON CONFLICT (name) DO NOTHING;

-- Grant it to roles
INSERT INTO role_permissions (role, permission_name)
SELECT r.role, 'finance.journals.approve'
FROM (VALUES ('CEO'), ('GM'), ('IT_ADMIN'), ('FINANCE_MANAGER')) AS r(role)
ON CONFLICT DO NOTHING;
```
The permissions matrix test (`src/test/permissions.matrix.test.ts`) will fail the build if the code references a permission that was never seeded, or vice versa.
## RLS conventions

Every table touched by a gated feature needs RLS enabled and policies that delegate to `auth_user_has_permission`:
```sql
ALTER TABLE bookings ENABLE ROW LEVEL SECURITY;

DROP POLICY IF EXISTS bookings_read ON bookings;
CREATE POLICY bookings_read
  ON bookings
  FOR SELECT
  USING (auth_user_has_permission('bookings.view'));

DROP POLICY IF EXISTS bookings_write ON bookings;
CREATE POLICY bookings_write
  ON bookings
  FOR ALL
  USING (auth_user_has_permission('bookings.edit'))
  WITH CHECK (auth_user_has_permission('bookings.edit'));
```
> **Warning**
> Do not write `USING (role = 'CEO')`. This is the same anti-pattern CLAUDE.md bans in application code. Grant a permission to the role via `role_permissions` instead. See docs/PERMISSIONS.md and the reference migration `20260416020000_remove_permission_bypass.sql`.
## Re-run safety

A well-written migration should be re-runnable without errors because (a) the CI / local dev loop re-runs everything after `db reset`, and (b) if a production apply partially fails, the retry must not double-insert.
Checklist before merging:

- [ ] Every `CREATE` uses `IF NOT EXISTS`.
- [ ] Every `ALTER TABLE ... ADD COLUMN` uses `IF NOT EXISTS`.
- [ ] Every `INSERT` has an `ON CONFLICT DO NOTHING` or a `WHERE NOT EXISTS` guard.
- [ ] Every `DROP` uses `IF EXISTS`.
- [ ] Functions and policies use `CREATE OR REPLACE` / `DROP POLICY IF EXISTS ... CREATE POLICY`.
- [ ] Running `supabase db reset` locally succeeds.
- [ ] `npm test` passes, including the permissions matrix test.
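The `WHERE NOT EXISTS` guard from the checklist is the fallback for tables where no unique constraint backs an `ON CONFLICT` clause. The names below reuse the seed-migration pattern earlier in this runbook and are illustrative:

```sql
-- Idempotent insert without relying on a unique constraint
INSERT INTO role_permissions (role, permission_name)
SELECT 'FINANCE_MANAGER', 'finance.journals.approve'
WHERE NOT EXISTS (
  SELECT 1
  FROM role_permissions
  WHERE role = 'FINANCE_MANAGER'
    AND permission_name = 'finance.journals.approve'
);
```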
## Renaming or deleting a permission
From CLAUDE.md: update PERMISSIONS.md, every code reference, and ship a rename/drop migration in the same PR.
```sql
-- Rename: preserve grants by updating by primary key
UPDATE permissions
SET name = 'finance.journals.review'
WHERE name = 'finance.journals.approve';

UPDATE role_permissions
SET permission_name = 'finance.journals.review'
WHERE permission_name = 'finance.journals.approve';

-- Drop (only after every code reference is gone)
DELETE FROM role_permissions WHERE permission_name = 'legacy.flag';
DELETE FROM permissions WHERE name = 'legacy.flag';
```
After pushing the migration, run `npm test` — the permissions matrix test checks for drift against `docs/PERMISSIONS.md` and the references in `src/lib/api.ts` / `src/App.tsx`.
## When a migration fails in production
- Read the error from the `supabase db push` output carefully. The failing statement and line are in the stderr.
- If the error is recoverable (e.g. you added a column with a non-nullable default on a huge table and it timed out):
  - Write a new migration that completes the job in smaller steps.
  - Do not edit the original file.
- If the migration partially applied and the DB is in a broken state:
  - Connect directly with `psql` using `SUPABASE_DB_URL` (the same secret the backup workflow uses).
  - Inspect `schema_migrations` to see what was recorded as applied.
  - Manually apply the remaining statements, then write a follow-up no-op migration so `main` and prod stay in sync.
- If the failure is catastrophic and you need to restore data, see backup-restore.md.
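A sketch of the manual-recovery connection, assuming `SUPABASE_DB_URL` is exported in your shell and that the CLI's bookkeeping table lives in the `supabase_migrations` schema (the default for hosted Supabase; verify against your project):

```shell
# Connect with the same secret the backup workflow uses
psql "$SUPABASE_DB_URL"

# Inside psql, inspect what the remote recorded as applied:
#   SELECT version FROM supabase_migrations.schema_migrations ORDER BY version;
# Then apply the remaining statements by hand and reconcile with a
# follow-up no-op migration on main.
```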