How to Seed a Supabase Database When `supabase db seed` Isn't a Real Command
By the Seedfast team
Half of what people type into Google about seeding Supabase starts with a command that doesn't exist. Here is the actual seeding surface — seed.sql, supabase db reset, config.toml, auth.users, preview branches — and Seedfast as a schema-aware option for teams whose seed file stopped keeping up.
Key Takeaways#
- supabase db seed is not a Supabase CLI subcommand — seeding runs as a side effect of supabase db reset and supabase start, which execute the files declared under [db.seed] in supabase/config.toml
- The default path is a single supabase/seed.sql, but the sql_paths list supports globs — splitting seed data into multiple files is idiomatic once the single-file version stops being readable
- auth.users cannot be safely seeded with raw SQL inserts on a real Supabase project; use supabase.auth.admin.createUser() with the service role key, or auth.admin.generateLink() for invite-style flows
- Preview branches re-apply seed.sql on creation, so the file is the branch's whole dataset — that works until the schema drifts from what the file was written against
- Seedfast reads the branch's live schema on each run and generates FK-valid connected rows from a plain-English scope — a path many teams switch to once the seed.sql diff-per-migration stops fitting into the PR that caused it
You opened the Supabase docs, saw the seed.sql example, thought "there must be a supabase db seed command that runs this", typed it, got unknown command, and closed the tab. That sequence is why this article exists. Supabase has a real seeding story, it just isn't a single subcommand. This guide covers what the actual workflow looks like, why auth.users and remote seeding are the two places it gets painful, and what to use when the seed file stops fitting your schema.
If you want the cross-database fundamentals first, how to seed a database covers the PostgreSQL version without the Supabase-specific surface; this article is specifically about the Supabase CLI, seed.sql, branching, and auth.users.
Is supabase db seed a real command?#
No. The Supabase CLI has supabase db (reset, push, pull, diff, dump, lint, query, start) and supabase seed buckets (a storage-bucket seeder, unrelated to table data). There is no supabase db seed subcommand. Two commands run the seed files on your behalf:
- supabase start — seeds on the first start of a local stack
- supabase db reset — drops the local database, re-applies migrations, runs the seed files
Both of them read [db.seed] from supabase/config.toml to figure out which files to run. That's the mechanism. Everything else in this article is about what goes into those files and where the default path breaks.
How Supabase actually seeds a database#
A new Supabase project gets this in supabase/config.toml when you run supabase init:
[db.seed]
enabled = true
sql_paths = ['./seed.sql']
The CLI runs those SQL files in order against the local database after migrations finish. The default entry points at a single supabase/seed.sql. You can replace it with a list, a glob, or both:
[db.seed]
enabled = true
sql_paths = [
'./seeds/00-reference.sql',
'./seeds/10-users.sql',
'./seeds/20-*.sql'
]
Files are executed in the order they appear in sql_paths, with globs expanded alphabetically. The "seed fails halfway and leaves the database half-populated" failure mode is real — none of the files are wrapped in a transaction automatically. If you want all-or-nothing, wrap each file's contents in BEGIN; ... COMMIT;.
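The ordering rule — list position for literal entries, alphabetical expansion for globs — can be sketched as a small function. This is a simplified model of the behavior described above, not the CLI's actual implementation:

```typescript
// expandSeedPaths: literal entries keep their position in sql_paths;
// each glob's matches are spliced in at that position, sorted
// alphabetically. Purely illustrative — the real CLI does its own
// filesystem globbing.
function expandSeedPaths(entries: string[], files: string[]): string[] {
  const result: string[] = [];
  for (const entry of entries) {
    if (entry.includes('*')) {
      // Turn the glob into a regex: escape regex metacharacters, then '*' → '.*'
      const re = new RegExp(
        '^' + entry.replace(/[.+?^${}()|[\]\\]/g, '\\$&').replace(/\*/g, '.*') + '$'
      );
      result.push(...files.filter((f) => re.test(f)).sort());
    } else {
      result.push(entry);
    }
  }
  return result;
}
```

With `['./seeds/00-reference.sql', './seeds/20-*.sql']`, the reference file always runs first and the `20-` files run in alphabetical order after it — which is why numeric prefixes are the usual convention for multi-file seeds.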
Via the CLI, seed files run after supabase db reset or the first supabase start — not on subsequent starts (the stack already has data), and not automatically against remote projects. Preview branches re-apply them on branch creation (covered below).
Method 1: Seeding with seed.sql (the default path)#
The simplest seed is one SQL file with INSERTs.
-- supabase/seed.sql
INSERT INTO public.teams (id, name) VALUES
(1, 'Engineering'),
(2, 'Design')
ON CONFLICT (id) DO NOTHING;
INSERT INTO public.posts (id, team_id, title, body) VALUES
(1, 1, 'Launch plan', 'First draft'),
(2, 2, 'Brand voice', 'Work in progress')
ON CONFLICT (id) DO NOTHING;
Run it:
supabase db reset
The reset drops the local database, re-runs every migration under supabase/migrations/, then executes the files in sql_paths. If a statement fails, the reset fails — the error line in stderr tells you which file and which row.
Three things to get right the first time:
Idempotency. ON CONFLICT DO NOTHING on every INSERT with a primary key keeps reruns cheap. Without it, running the seed file a second time against an already-seeded database (a manual psql -f seed.sql, say) fails on the first duplicate-key error. For reference data that should reflect the latest values (feature flags, role definitions), use ON CONFLICT (...) DO UPDATE SET ... instead.
Row Level Security (RLS). Supabase enables RLS by default on tables you create through the dashboard. The seed file runs as the postgres superuser locally, so RLS does not block it — which is the right default. The gotcha is tests: if the harness connects as anon or an authenticated user, RLS policies apply and rows may be invisible even though they exist. Seeding the data is one problem; writing policies that let the right role read it is a separate one.
Foreign keys. Insert parents before children. The reset does not defer constraints automatically. For schemas where circular FKs exist, either declare the FK DEFERRABLE INITIALLY DEFERRED and wrap inserts in a transaction with SET CONSTRAINTS ALL DEFERRED, or insert a nullable row first and UPDATE the reference in a second pass.
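Parents-before-children is a topological sort over the FK graph. A minimal sketch — table names are illustrative, and a real seeder would read the dependencies from information_schema rather than a hand-written map:

```typescript
// insertOrder: given each table's FK parents, return an order in which
// every parent table comes before its children. Throws on a cycle,
// which is exactly the case that needs DEFERRABLE constraints or a
// nullable-then-UPDATE second pass, as described above.
function insertOrder(deps: Record<string, string[]>): string[] {
  const visited = new Set<string>();
  const order: string[] = [];
  const visit = (table: string, path: Set<string>): void => {
    if (visited.has(table)) return;
    if (path.has(table)) throw new Error(`circular FK involving ${table}`);
    path.add(table);
    for (const parent of deps[table] ?? []) visit(parent, path);
    path.delete(table);
    visited.add(table);
    order.push(table); // all parents are already in the list
  };
  for (const table of Object.keys(deps)) visit(table, new Set());
  return order;
}
```

For the schema in Method 1, `insertOrder({ posts: ['teams'], teams: [] })` puts teams before posts — the same order the hand-written seed.sql uses.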
This path scales to a few hundred rows across a few tables. At 20 tables with weekly migrations, every ALTER TABLE ... ADD COLUMN ... NOT NULL on main breaks seed.sql until someone hand-edits it. Method 4 below addresses this directly; seed file maintenance covers the general lifecycle in full, and the Supabase version is identical except that preview branches make the breakage more visible because every PR hits it.
Method 2: TypeScript seeders against the Supabase database#
When the seed outgrows a single SQL file — you need generated emails, UUIDs, timestamps, or any logic more complex than literals — a TypeScript seed script against the Supabase connection string is the next step.
// scripts/seed.ts
import { Client } from 'pg';
import { faker } from '@faker-js/faker';
const client = new Client({ connectionString: process.env.SUPABASE_DB_URL });
async function main() {
await client.connect();
await client.query(`TRUNCATE public.posts, public.teams RESTART IDENTITY CASCADE`);
const { rows: teams } = await client.query(
`INSERT INTO public.teams (name) VALUES ($1), ($2) RETURNING id`,
['Engineering', 'Design']
);
for (const team of teams) {
for (let i = 0; i < 5; i++) {
await client.query(
`INSERT INTO public.posts (team_id, title, body) VALUES ($1, $2, $3)`,
[team.id, faker.company.catchPhrase(), faker.lorem.paragraph()]
);
}
}
await client.end();
}
main().catch((e) => { console.error(e); process.exit(1); });
Run it with tsx scripts/seed.ts after supabase db reset (or after migrations on a remote project). The same pattern works with Prisma's seed.ts, Drizzle's scripts, or Kysely — the only thing that changes is the driver. For the Prisma and Drizzle specifics on a Postgres connection like Supabase's, how to seed a database has the ORM-by-ORM walkthrough.
Connection strings on Supabase: session vs transaction pooler#
Supabase gives every project three connection strings:
- Direct — db.[ref].supabase.co:5432 — IPv6-only unless you enable the IPv4 add-on, straight to Postgres
- Session pooler — aws-0-[region].pooler.supabase.com:5432 — PgBouncer in session mode, IPv4-compatible, keeps one Postgres connection per client session
- Transaction pooler — aws-0-[region].pooler.supabase.com:6543 — PgBouncer in transaction mode, returns the Postgres connection to the pool after every transaction
For seeding, use direct or session pooler. The transaction pooler (port 6543) discards prepared statements between transactions. Any driver that prepares statements — pg with named queries, Prisma, Drizzle with prepared mode — will fail mid-seed with prepared statement "s1" already exists or cached plan must not change result type. App code that uses short-lived transactions runs fine through 6543; seed scripts do not.
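One way to make the wrong-pooler failure loud is a pre-flight check at the top of the seed script. assertSeedableConnection is a hypothetical helper, not part of any Supabase tooling; the port numbers follow the defaults listed above:

```typescript
// Refuse the transaction pooler up front instead of failing mid-seed
// with "prepared statement already exists". WHATWG URL parses
// postgres:// connection strings, including host and port.
function assertSeedableConnection(connectionString: string): void {
  const url = new URL(connectionString);
  if (url.port === '6543') {
    throw new Error(
      'Refusing to seed through the transaction pooler (port 6543); ' +
        'use the direct or session-pooler connection string instead.'
    );
  }
}
```

Call it with `process.env.SUPABASE_DB_URL` before `client.connect()` in a script like the one in Method 2 — a clear error at startup beats a cryptic one halfway through the inserts.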
Method 3: Snaplet / supabase-community/seed#
Snaplet was a managed data platform until August 2024, when the company shut down and open-sourced its seed library as supabase-community/seed. The tool introspects your Postgres schema and generates a type-safe seed client:
import { createSeedClient } from '@snaplet/seed';
const seed = await createSeedClient();
await seed.users((x) =>
x(10, () => ({
posts: (x) => x(3),
}))
);
It resolves foreign keys, inserts in dependency order, and produces deterministic output via the copycat library. The values are placeholder-style — names and emails look like IDs, not like real business data — and the client has to be re-generated (npx @snaplet/seed sync, same npm package name as before the move) every time the schema changes.
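The determinism copycat provides — same input key, same output value, so reruns diff cleanly — can be illustrated with a toy version. This is not copycat's actual algorithm, just the shape of the idea:

```typescript
// deterministicEmail: hash the key, derive the value from the hash.
// The same key always yields the same placeholder-style email, which
// is what makes seeded data stable across runs.
function deterministicEmail(key: string): string {
  let hash = 0;
  for (const ch of key) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // 32-bit rolling hash
  return `user_${hash.toString(36)}@example.com`;
}
```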
As of writing, the repo has had limited commit activity since mid-2024, and open community questions about its roadmap on Supabase Discord and Answer Overflow appear to sit without a maintainer response. The library still works on schemas it already supports and is a reasonable fit for teams with an existing Snaplet Seed investment and a schema that doesn't churn much. The full category comparison covers where it lands against other data seeding tools. For teams evaluating their options today, the actively maintained schema-aware alternative is Seedfast — covered next.
Method 4: Seedfast (schema-aware seeding on live Supabase)#
Seedfast reads the live Supabase schema — tables, columns, constraints, FKs — and generates valid, connected rows from a plain-English scope. Seedfast regenerates its output on every run, so there is no client to sync, no seed plan to hand-edit after a migration, and FK order is resolved from the schema rather than written into the seed.
npm install -g seedfast
# or: brew install argon-it/tap/seedfast
seedfast connect
# Paste the Supabase direct or session-pooler connection string when prompted
# (the "Connection string" under Project Settings → Database)
seedfast seed --scope "small SaaS app: 3 orgs, 20 users, 100 posts with realistic activity"
When a migration adds ALTER TABLE posts ADD COLUMN published_at TIMESTAMPTZ NOT NULL, the next seedfast seed picks up the new column without a code change. Seedfast inserts in topological order and handles self-referential tables (employees.manager_id → employees.id) — for circular FK chains, it fills nullable references in a second pass, and for constraints declared DEFERRABLE INITIALLY DEFERRED it issues SET CONSTRAINTS ALL DEFERRED inside the insert transaction so the whole batch commits atomically.
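Independent of any particular tool, the two-pass pattern for a self-referential FK can be sketched as a plan builder: pass 1 inserts every row with the nullable reference NULLed out, pass 2 patches the references once all parent ids exist. The types and names here are illustrative, not Seedfast internals:

```typescript
type EmployeeRow = { id: number; name: string; manager_id: number | null };

// twoPassPlan: split rows into INSERTs (manager_id = NULL) and the
// follow-up UPDATEs that restore the self-references.
function twoPassPlan(rows: EmployeeRow[]): {
  pass1: EmployeeRow[];
  pass2: Array<{ id: number; manager_id: number }>;
} {
  return {
    pass1: rows.map((r) => ({ ...r, manager_id: null })),
    pass2: rows
      .filter((r): r is EmployeeRow & { manager_id: number } => r.manager_id !== null)
      .map((r) => ({ id: r.id, manager_id: r.manager_id })),
  };
}
```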
Different scopes for different environments:
# Local development
seedfast seed --scope "2 orgs, 5 users, 10 posts"
# Preview branch for a PR
seedfast seed --scope "3 signed-up users, 5 draft posts, 2 published"
# Staging
seedfast seed --scope "50 orgs, 500 users, 5,000 posts, 3 months of activity"
Seedfast coexists with the seed.sql path rather than replacing it. The reference data your tests depend on literally (the admin@example.com account, a specific feature flag, country-code lookups) stays in a short seed.sql or a trimmed seed.ts. Seedfast fills the relational bulk around those fixtures. The CI/CD database seeding guide covers the pipeline-side wiring — scope strings in GitHub Actions, --output json for programmatic checks, API keys — and applies to Supabase projects unchanged.
Seedfast with auth.users#
Most real Supabase schemas have an FK from public.profiles.user_id to auth.users.id. Seedfast does not write to the auth schema — that belongs to the admin API (covered in the next section). The working pattern is a two-step seed:
# Step 1: create the auth users via the admin API (scripts/seed-users.ts from below)
tsx scripts/seed-users.ts
# Step 2: Seedfast fills the application tables, picking up the existing auth.users.id values
# as parent rows for any FK in public.* that references auth.users
seedfast seed --scope "2 users already exist — fill their profiles, 10 posts each, with comments"
Seedfast reads the existing rows in auth.users the same way it reads any other parent table: it queries the table, picks valid IDs, and uses them as FK targets in the child rows it generates. The scope string can refer to the existing users by role ("existing admin users", "already-onboarded users") to keep the generated data coherent.
A free tier is available for getting started; small schemas generally fit within its limits. Connect the CLI to your Supabase project in about two minutes, no credit card required. For what crosses the wire during a run, see data handling and privacy.
Seeding auth.users without breaking your login flow#
This is the question the official docs answer unevenly and the question the community asks most often. Raw INSERTs into auth.users work locally because the local stack is a full Postgres you have superuser access to. They break on a deployed project because auth.users has triggers, an encrypted_password column that expects a bcrypt hash with Supabase's specific cost factor, and a relationship with auth.identities that the Supabase client populates for you.
The safe paths:
Admin API via supabase-js with the service role key. Runs against any environment — local, staging, production. Produces real users that can log in.
The service role key bypasses Row Level Security and grants full project access. Keep it in server-side environment variables only: never place it in a NEXT_PUBLIC_* variable, commit it to .env in a repo, or import it into a client bundle. The snippet below is a Node-only script — it is never imported from React, Next.js pages, or Edge runtimes.
// scripts/seed-users.ts — Node-only. Never imported from client code.
import { createClient } from '@supabase/supabase-js';
const admin = createClient(
process.env.SUPABASE_URL!,
process.env.SUPABASE_SERVICE_ROLE_KEY!, // server-side env var only
{ auth: { autoRefreshToken: false, persistSession: false } }
);
for (const u of [
{ email: 'alice@example.com', password: 'dev-password-1' },
{ email: 'bob@example.com', password: 'dev-password-2' },
]) {
const { data, error } = await admin.auth.admin.createUser({
email: u.email,
password: u.password,
email_confirm: true, // skip the confirmation email in seeds
});
if (error) throw error;
console.log('created', data.user?.id);
}
generateLink for invite-style flows. Call admin.auth.admin.generateLink({ type: 'invite', email }) with the service role key to produce an invite link for testing; on the local stack, emails Supabase sends land in the Inbucket inbox at localhost:54324. Useful for testing the accept-invite path.
SQL insert with the function Supabase uses internally. The minimum viable row is longer than it looks, and it will only produce a sign-in-capable user after a matching auth.identities row is added. On a deployed project, the same code creates a half-formed row that cannot sign in and cannot be cleanly re-created via the admin API (the email will already exist) — do not run it there.
-- LOCAL ONLY — DO NOT RUN against a deployed Supabase project.
-- Without a matching auth.identities row, the user cannot sign in with
-- email/password even locally. On a remote project, this creates a
-- broken state that is harder to undo than to prevent. Use
-- supabase.auth.admin.createUser() (above) for any deployed environment.
INSERT INTO auth.users (instance_id, id, aud, role, email, encrypted_password, email_confirmed_at, created_at, updated_at, raw_app_meta_data, raw_user_meta_data)
VALUES (
'00000000-0000-0000-0000-000000000000',
gen_random_uuid(),
'authenticated',
'authenticated',
'alice@example.com',
crypt('dev-password-1', gen_salt('bf')),
NOW(), NOW(), NOW(),
'{"provider":"email","providers":["email"]}',
'{}'
);
For email-login to work, a companion auth.identities row (provider_id, identity_data JSON, provider = 'email') is required — the GoTrue auth server looks it up on sign-in. The admin API creates that row automatically; the raw SQL path does not.
The rule most teams converge on: admin API for the users the app will see, seed.sql for the application tables those users own. Never put service role keys in seed.sql and never ship the admin-API seed script to a client bundle. Once auth.users is populated, run Seedfast on your Supabase project to fill the application tables that reference those users — about two minutes to connect, free tier, no credit card.
Seeding a remote Supabase project#
The CLI's seed files target the local stack. As of writing, a remote Supabase project does not pick them up automatically. supabase db reset --linked exists but drops every row in every table — a destructive operation you likely do not want in a CI workflow against a shared staging database.
The working patterns are:
Run psql or a TypeScript seeder against the direct connection string. Copy the direct (or session-pooler) string from Project Settings → Database and run the seed as you would against any other Postgres. This is the pragmatic answer for staging and demo environments.
psql "$SUPABASE_DB_URL_DIRECT" -f supabase/seed.sql
Use the service role key for auth.users. As above. The admin API works against the deployed project the same way it works locally.
Gate it. A destructive seed run against production is a single command away from deleting real user data. The safe pattern is an environment check at the top of the seed script (if (process.env.SUPABASE_URL.includes('production')) throw new Error('refuse');) and — if the seed is large — an explicit --force flag that has to be typed in.
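The gate can be sketched as a function called before anything destructive runs. The env-var shape and the --force flag are conventions to adapt to your project, not Supabase features:

```typescript
// assertSafeToSeed: refuse when the target URL looks like production,
// and require an explicit --force for destructive runs. Checks are
// illustrative — tighten the production test to match your naming.
function assertSafeToSeed(supabaseUrl: string | undefined, argv: string[]): void {
  if (!supabaseUrl) {
    throw new Error('SUPABASE_URL is not set — refusing to guess the target');
  }
  if (/prod/i.test(supabaseUrl)) {
    throw new Error('refusing to seed: URL looks like production');
  }
  if (!argv.includes('--force')) {
    throw new Error('pass --force to confirm a destructive seed run');
  }
}
```

Wire it in as `assertSafeToSeed(process.env.SUPABASE_URL, process.argv)` at the top of the seed script, before the first TRUNCATE.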
The CLI-native story for remote seeding remains an open gap. Several community threads track it in the supabase/supabase GitHub discussions, but the feature is not in the CLI as of writing. A seedfast seed against the direct connection string treats local and remote identically — same command, same scope, same result — which is why teams seeding multiple Supabase environments on the same day tend to standardize on it.
Supabase branching and seed data#
Supabase branches are full Postgres databases provisioned per Git branch. The preview branch re-applies your migrations and your seed.sql on creation, so a well-maintained seed.sql gives every preview branch a populated database — the same mental model the local stack uses, in the cloud.
Three things to know:
Seed files apply on branch creation. The sql_paths glob is evaluated, every matching file is executed, and the branch is ready. The time varies with the seed's size; a small reference seed finishes in seconds.
Branches see the seed state of the commit that created them. If you commit a seed.sql change on a feature branch, the preview branch for that PR gets the new seed. Branches opened before the change keep the old state until closed and re-created or until you push the new commit.
Large seeds or flaky seeds surface on every PR. The break-once-then-fix loop that shows up on a local stack once a week shows up on every PR in branching, because every PR creates a new branch and runs the file. Teams move dynamic data off seed.sql at this point. A TypeScript seeder in CI works but requires maintaining code through every migration; seedfast seed reads the new branch's live schema on each run, so preview branches stay populated without a PR-time edit to the seeder itself.
For the branch-level mechanics — copy-on-write, seed-parent-once, reseed-on-drift — Neon branching seed data covers the patterns in Neon-specific detail; how to seed a Neon database is the broader fundamentals guide. Neon and Supabase branching are different mechanisms (CoW branches vs separate databases per branch), but the question of "when do I reseed versus inherit" is the same question.
Common Supabase seeding errors and how to fix them#
prepared statement "s1" already exists#
You are seeding through the transaction pooler (port 6543). Switch the connection string to the direct (db.<ref>.supabase.co:5432) or session-pooler port (5432 on the pooler host).
permission denied for schema auth#
You are trying to INSERT INTO auth.users with a non-superuser role. Locally, use the postgres role. On a deployed project, use the Admin API with the service role key — the schema is owned by the supabase_auth_admin role and ordinary users cannot write to it directly.
duplicate key value violates unique constraint "users_email_key" on re-seed#
The seed is not idempotent. Add ON CONFLICT (email) DO NOTHING or ON CONFLICT (email) DO UPDATE SET ... to every INSERT that hits a unique index, or run supabase db reset, which drops and recreates the database before applying the seed.
new row violates row-level security policy when the app reads seeded data#
The seed ran as postgres and the rows exist, but the test harness connects as anon or authenticated and the RLS policies are filtering them out. Check that the inserted user_id/team_id columns match a JWT sub your tests authenticate with, or run the app-side reads with the service role key in CI only.
relation "auth.users" does not exist in a migration#
The migration runs before the auth schema is ready. In local branches, this can happen if a migration uses a cross-schema reference and the extension load order is wrong. In CI, it usually means migrations are running against a plain Postgres container rather than the Supabase stack — run supabase start before applying migrations so auth, storage, and friends exist.
cached plan must not change result type#
A schema change invalidated a prepared statement cached in the pooler. Switch to the direct port for the seed, run migrations before seeding, and — if the error repeats — reconnect the driver to drop the cached plans.
seed.sql vs TypeScript vs Snaplet vs Seedfast on Supabase#
| Aspect | seed.sql | TypeScript seeder | Snaplet / supabase-community/seed | Seedfast |
|---|---|---|---|---|
| Setup | In every Supabase project by default | One file, one driver install | Install, sync, write seed plans | npm install -g seedfast |
| What you maintain | The file | The script | The seed plans + re-run sync on schema change | Nothing — reads the live schema |
| FK order | Manual | Manual | Automatic (from introspection) | Automatic |
| Survives migrations | No — hand edit on every schema change | No — hand edit on every schema change | Only after sync (client regen step) | Yes — re-reads the live schema each run |
| Handles auth.users | Local only; breaks on deployed | With admin API, yes | Not for auth.users specifically | Focus is application tables; auth via admin API |
| Works on remote | Manual (psql against direct URL) | Same | Works if you point the client at the remote | Same — seedfast seed against the direct URL |
| Good for reference/literal rows | Excellent | Excellent | Overkill | Not the target |
| Good for relational bulk | Painful past ~100 rows / ~10 tables | Works, high code maintenance | Works, high schema-sync maintenance | The target use case |
Pick based on the job. Reference and fixture rows live in seed.sql. Authenticated users use the admin API. Relational bulk that has to stay valid through migrations is the part that stops fitting into either — Seedfast handles that without a seed file to keep in sync. Try Seedfast free on your Supabase schema — no credit card, about two minutes to set up.
Frequently asked questions#
Is supabase db seed a real command?#
No. The Supabase CLI has supabase db reset and supabase start, both of which run the seed files declared under [db.seed] in supabase/config.toml. There is a separate supabase seed buckets for storage, unrelated to table data. Typing supabase db seed returns unknown command because the functionality lives on db reset instead.
How does Supabase seed a database automatically?#
On supabase db reset and on the first supabase start, the CLI reads the sql_paths list from supabase/config.toml under [db.seed], expands any globs, and runs the files in order against the local database after migrations finish. The default list is ['./seed.sql']; you can replace it with multiple paths to split a large seed across files.
How do I seed auth.users in Supabase?#
Call supabase.auth.admin.createUser({ email, password, email_confirm: true }) with the service role key. That path works locally and on deployed projects, produces real users that can sign in, and populates the companion auth.identities row that the GoTrue auth server requires for email/password sign-in. Raw INSERTs into auth.users create a row but not a working login — even locally, the user cannot sign in without a matching auth.identities record, and on a remote project the half-formed state is harder to undo than to prevent.
Can I run supabase db reset against a remote project?#
supabase db reset --linked exists and resets the remote database, but it drops every row in every table — a destructive operation you likely do not want in CI against a shared staging environment. The common pattern for remote seeding is psql "$SUPABASE_DB_URL_DIRECT" -f seed.sql (or a TypeScript seeder) gated behind an environment check.
Do Supabase preview branches run seed.sql?#
Yes. When a preview branch is created, Supabase re-applies migrations and executes the sql_paths files declared in config.toml. That means the branch comes up with whatever your seed.sql describes — the same data mental model as the local stack, per PR.
What's the difference between the session pooler and the transaction pooler for seeding?#
The session pooler (port 5432 on the pooler.supabase.com host) keeps one Postgres connection per client session, so prepared statements and session-level SET statements survive. The transaction pooler (port 6543) returns the connection after every transaction, which breaks prepared statements mid-seed. Use session pooler or direct for seeds; transaction pooler is for short-lived application queries.
How do I seed more than one SQL file?#
Replace sql_paths = ['./seed.sql'] with a list or a glob:
[db.seed]
enabled = true
sql_paths = ['./seeds/00-reference.sql', './seeds/10-*.sql']
Files run in listed order; globs expand alphabetically. Wrap each file's contents in BEGIN; ... COMMIT; if you want all-or-nothing semantics, because the CLI does not wrap the whole sequence in one transaction.
Does Seedfast replace seed.sql?#
No — it coexists. Reference and fixture rows that tests reference by literal value (admin accounts, feature flags, country codes) stay in a short seed.sql or seed.ts. Seedfast fills the relational bulk around them — orders, activity, posts, events — using the live schema, so the part that normally breaks on every migration is the part it takes off your hands.
Is Seedfast free to try on a Supabase project?#
Yes. A free tier is available, with no credit card required to connect and run a first seed — small schemas generally fit within its limits. Point seedfast connect at your Supabase direct or session-pooler connection string; if the schema exceeds the free tier, the CLI surfaces which limit was hit and links to pricing, and full details are on the Seedfast pricing page.
Try Seedfast on your Supabase schema#
If the part that hurts is seed.sql breaking every migration, start here:
npm install -g seedfast
seedfast connect # paste the direct or session-pooler connection string
seedfast seed --scope "small SaaS app: 3 orgs, 20 users, 100 posts with activity"
No seed.sql edit. No seed.ts regen. The schema is the source of truth, and Seedfast reads it fresh on every run. Get started with Seedfast — about two minutes to connect, free tier, no credit card.
Related guides#
- Database seeding methods and best practices — the conceptual pillar: reference vs test data, idempotency, when seed files stop scaling
- How to seed a database: PostgreSQL practical guide — the framework-agnostic version with Prisma, Drizzle, TypeORM, and raw pg
- How to seed a Neon database — the Neon-specific sibling to this article, covering pooled vs direct, branch inheritance, and the three seeding methods
- Neon branching seed data — the branch-level patterns (seed-parent-once, drift-reseed, schema-only rescue) for Neon's copy-on-write model
- Seed file maintenance — why static seed.sql drifts from the schema and what the alternatives look like
- Data seeding tools compared — the full comparison of Faker, ORM seeders, enterprise anonymization, and schema-aware generators
- Database seeding in CI/CD — pipeline wiring for Supabase, Neon, and any Postgres
- Get started with Seedfast — connect to your Supabase project and run the first schema-aware seed