Prisma Postgres Seed: Stop Fighting Your Connection String
By the Seedfast team
Seed Prisma Postgres in one command — or rewrite your seed script every time the schema changes. Prisma's managed Postgres spins up an empty database in seconds; then you have to fill it. Here are three ways to do it without fighting the connection string.
- Prisma Postgres ships with two TCP connection strings — direct (db.prisma.io:5432) and pooled (pooled.db.prisma.io:5432). Always seed against the direct URL. The pooled endpoint runs through a connection pooler and breaks long transactions and prepared statements — the same class of failure Neon and Supabase users hit on their poolers.
- For edge runtimes that can't open a TCP socket (Cloudflare Workers, Vercel Edge), the @prisma/adapter-ppg package — exporting PrismaPostgresAdapter — wraps Prisma's serverless driver and accepts the same direct TCP connection string. For Node-side seeds the plain TCP URL with the standard pg driver is simpler.
- prisma db seed still works — but in Prisma ORM v7 it no longer runs automatically during prisma migrate dev. You have to invoke it explicitly, or call it from your CI step.
- Prisma Postgres has no production data to copy, no PII, no compliance sign-off — just an empty schema waiting for realistic test data. That's the same starting point developers in fintech, healthcare, and other regulated industries hit on every greenfield project.
- When the schema changes — which it will — hand-written seed files break. Seedfast reads the live schema on every run and regenerates valid, connected data, so there's no seed.ts to maintain.
Quick fix if you landed here from a broken seed:
# Pooled URL — breaks seeding (transaction-mode pooler, no prepared statements)
postgres://USER:PASS@pooled.db.prisma.io:5432/?sslmode=require
# Direct URL — use this for seeds, migrations, admin scripts
postgres://USER:PASS@db.prisma.io:5432/?sslmode=require
The rest of this guide walks through the three paths that actually work against Prisma Postgres — psql, prisma db seed, and the @prisma/adapter-ppg serverless driver — and shows how to keep the dataset alive across migrations.
You provisioned Prisma Postgres because it spins up instantly, scales to zero, and ships with the tooling you already use — prisma migrate, the Prisma Client, the new create-prisma scaffold. But the database is empty, and the only realistic data you can copy in from somewhere is production data — which, if you work in a regulated industry, you're not allowed to touch. To seed a Prisma Postgres database, you need three things: the right connection string, a seed strategy that survives schema changes, and a way to make the data look like the real thing without ever leaving development. This guide covers all three.
If you want the framework-agnostic version of this article first, how to seed a database covers the cross-stack fundamentals. This article is specifically about Prisma Postgres — the managed product, not the ORM seed pattern.
A fresh Prisma Postgres database is empty. Migrations create the schema; nothing fills it. Unlike a long-lived shared dev database that accumulates data over months, a Prisma Postgres project is born clean and stays clean until you put something in it.
Three scenarios force the seeding question:
- New project onboarding. A teammate clones the repo, runs npx prisma migrate dev, and ends up with thirty empty tables. The app boots but every list view is blank.
- CI and ephemeral environments. Each CI run, each preview deploy, each rebuilt local stack starts from zero. Without a seed step, end-to-end tests hit empty queries and fail in ways that have nothing to do with the change being tested.
- Demos and staging. You need 500 products, a year of order history, and users that look real. Faker-style random columns aren't good enough — the data has to hold up under a 10-minute customer demo.
Prisma's own docs cover the mechanics of prisma db seed — how to wire up prisma.config.ts, how to run TypeScript seeds with tsx. They don't cover the lifecycle problem: the seed file gets stale on every migration, and the team that wrote it hasn't touched it in six weeks. That's the part that bites.
Every Prisma Postgres database exposes two TCP connection strings. The dashboard shows both:
# Direct (one-to-one, full Postgres protocol, prepared statements work)
postgres://USER:PASS@db.prisma.io:5432/?sslmode=require
# Pooled (transaction-mode pooler, optimized for short-lived requests)
postgres://USER:PASS@pooled.db.prisma.io:5432/?sslmode=require
The difference is the pooled. subdomain. The pooled endpoint sits behind a transaction-mode connection pooler — every query gets a fresh backend connection, which is great for serverless apps with bursty traffic, and bad for everything else. Specifically:
- Prepared statements don't survive across transactions, so any client that prepares a query (most of them) will hit prepared statement "s1" already exists after the first reuse.
- Long-running operations like a multi-statement seed transaction can be terminated when the pool decides to recycle the backend.
- Session-level settings (SET, advisory locks, temporary tables that span statements) silently disappear.
For seeding, migrations, and any admin script, use the direct URL. Wire it up explicitly:
# .env.local
DATABASE_URL="postgres://USER:PASS@pooled.db.prisma.io:5432/?sslmode=require" # app runtime
DIRECT_URL="postgres://USER:PASS@db.prisma.io:5432/?sslmode=require" # migrations + seeds
In schema.prisma, point directUrl at the direct URL — Prisma uses it for migrations automatically:
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
directUrl = env("DIRECT_URL")
}
If a multi-statement seed dies halfway through with prepared statement "s1" already exists or cached plan must not change result type, the script is talking to the pooler. Point it at the direct URL and re-run.
You may also see the older Prisma Accelerate–style URL — prisma+postgres://accelerate.prisma-data.net/?api_key=... — referenced in some examples. That URL is part of the Accelerate caching layer, not the standard Prisma Postgres connection. The two TCP strings above are what the Prisma Console gives you for a Prisma Postgres database, and they cover every seeding scenario this guide describes.
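All three failure modes in this section — pooled endpoint, missing SSL, Accelerate-style URL — can be caught before the first query. A minimal sketch of a guard you could put at the top of a seed script; assertSeedUrl is a hypothetical helper, not a Prisma API, and it assumes the URL shapes shown above:

```typescript
// Hypothetical guard (not a Prisma API): fail fast when a seed script is
// pointed at the wrong endpoint. Assumes the URL shapes shown above.
export function assertSeedUrl(raw: string): string {
  const url = new URL(raw);
  if (url.protocol === "prisma+postgres:") {
    // Accelerate-style URL — not the plain TCP string the Console issues
    throw new Error("Use the plain TCP connection string, not the prisma+postgres:// URL");
  }
  if (url.hostname.startsWith("pooled.")) {
    // Transaction-mode pooler — breaks prepared statements and long transactions
    throw new Error("Seeds need the direct URL (db.prisma.io), not the pooled endpoint");
  }
  if (url.searchParams.get("sslmode") !== "require") {
    // Prisma Postgres rejects non-SSL connections
    throw new Error("Add sslmode=require to the connection string");
  }
  return raw;
}
```

Called once at the top of a seed (for example, assertSeedUrl(process.env.DIRECT_URL!)), it turns each of these misconfigurations into one readable error instead of a mid-transaction crash.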
The simplest path. Write INSERTs, run them with psql.
-- seed.sql
INSERT INTO organizations (id, slug, plan) VALUES
(1, 'acme-corp', 'team'),
(2, 'beta-labs', 'free')
ON CONFLICT (id) DO NOTHING;
INSERT INTO projects (id, organization_id, name, status) VALUES
(1, 1, 'Mobile rewrite', 'active'),
(2, 1, 'Billing refactor', 'planned'),
(3, 2, 'Internal demo', 'active')
ON CONFLICT (id) DO NOTHING;
psql "$DIRECT_URL" -f seed.sql
SSL is mandatory on Prisma Postgres — the sslmode=require parameter on the connection string handles that automatically. Drop it and you'll get connection requires SSL.
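If scripts receive connection strings from several sources (local .env files, CI secrets, the Console), a tiny normalizer saves the failed round-trip. This is a hypothetical helper, not a Prisma API:

```typescript
// Hypothetical helper (not a Prisma API): make sure SSL is requested before
// handing a connection string to psql or a driver. Leaves an existing
// sslmode value (e.g. verify-full) untouched.
export function withSslRequire(raw: string): string {
  const url = new URL(raw);
  if (!url.searchParams.has("sslmode")) {
    url.searchParams.set("sslmode", "require");
  }
  return url.toString();
}
```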
For larger reference loads, COPY FROM STDIN is meaningfully faster than row-by-row inserts because it skips per-row parsing and planning:
psql "$DIRECT_URL" -c "COPY products (name, price, category_id) FROM STDIN CSV" < products.csv
ON CONFLICT DO NOTHING keeps the seed idempotent — CI can run it twice without falling over on duplicate keys. For values that should always reflect the latest state, use ON CONFLICT DO UPDATE:
INSERT INTO feature_flags (key, enabled) VALUES
('new_checkout', true)
ON CONFLICT (key) DO UPDATE SET enabled = EXCLUDED.enabled;
Raw SQL is fine for reference data — feature flags, country codes, role definitions, the dozen rows your app reads at boot. It starts breaking once you have ten or fifteen tables with foreign keys, because every migration that adds a NOT NULL column or a new FK forces you to hand-edit the seed. Seed file maintenance covers why this lifecycle is so brutal on active codebases.
Most teams using Prisma Postgres are also using Prisma ORM, so the natural path is prisma db seed. In current Prisma versions, the seed command is configured in prisma.config.ts:
// prisma.config.ts
import { defineConfig, env } from "prisma/config";
export default defineConfig({
schema: "prisma/schema.prisma",
migrations: {
path: "prisma/migrations",
seed: "tsx prisma/seed.ts",
},
datasource: { url: env("DIRECT_URL") },
});
A typical TypeScript seed against Prisma Postgres looks like any other Prisma seed — point the client at the direct URL and use the standard API:
// prisma/seed.ts
import { PrismaClient } from "@prisma/client";
const prisma = new PrismaClient();
async function main() {
const org = await prisma.organization.upsert({
where: { slug: "acme-corp" },
update: {},
create: { slug: "acme-corp", plan: "team" },
});
await prisma.project.upsert({
where: { organizationId_name: { organizationId: org.id, name: "Mobile rewrite" } },
update: {},
create: { organizationId: org.id, name: "Mobile rewrite", status: "active" },
});
}
main()
.catch((e) => {
console.error(e);
process.exit(1);
})
.finally(() => prisma.$disconnect());
npx prisma db seed
Two things changed in Prisma ORM v7 that catch teams off guard:
- prisma migrate dev no longer runs prisma generate or the seed automatically. The --skip-seed flag is gone because there's nothing to skip — you have to invoke prisma db seed explicitly when you want it.
- prisma migrate reset still triggers the seed, so the "wipe and restart" workflow is intact.
If you want the seed to run after every migration locally, wrap it in a package.json script:
{
"scripts": {
"db:reset": "prisma migrate reset --force",
"db:migrate": "prisma migrate dev && prisma db seed"
}
}
Most seeds run in Node — locally or in CI — and the plain TCP direct URL is the right choice there. If the seed has to run from an environment that can't open a TCP socket (a Cloudflare Worker, a Vercel Edge function, a managed worker without raw networking), use Prisma's serverless driver adapter:
npm install @prisma/adapter-ppg
// prisma/seed.ts
import { PrismaClient } from "@prisma/client";
import { PrismaPostgresAdapter } from "@prisma/adapter-ppg";
const adapter = new PrismaPostgresAdapter({
connectionString: process.env.DIRECT_URL!,
});
const prisma = new PrismaClient({ adapter });
async function main() {
await prisma.organization.upsert({
where: { slug: "acme-corp" },
update: {},
create: { slug: "acme-corp", plan: "team" },
});
}
main().finally(() => prisma.$disconnect());
The adapter takes the same direct TCP connection string and routes queries over Prisma's HTTP/WebSocket transport, so the seed runs anywhere the Prisma Client runs — but for a regular Node script, plain TCP without the adapter is one fewer dependency.
The bigger problem with both prisma db seed paths is the same one raw SQL has: the values are hand-written. When a migration adds accounts.organization_id NOT NULL, the seed breaks on the next run. Someone — usually whoever picked up the on-call rotation — has to fix it before the rest of the team can boot the app.
If that lifecycle sounds like every Monday morning on your team, Seedfast skips the seed file entirely — generating the dataset from the live schema on every run. The next section covers how it fits into a Prisma Postgres workflow.
Point Seedfast at the Prisma Postgres direct URL, describe the dataset in plain English, and Seedfast reads the schema — tables, columns, constraints, foreign keys — and generates a valid, connected dataset that fits it. There's no seed.sql or seed.ts in the repo, because the schema is the seed plan.
npm install -g seedfast
# or: brew install argon-it/tap/seedfast
# Log in and connect — paste the direct (db.prisma.io) connection string
seedfast connect
# Generate data from the current schema
seedfast seed --scope "two organizations on the team plan, 8 active projects, varied task assignments"
The next migration adds a column? Seedfast reads it on the following run and generates the new field. The next migration adds a whole table? Same — picked up on the next run. The dependency graph and insert order come from the schema, not a hand-written FK list, so circular references and self-references don't need special-casing.
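For intuition, the ordering problem a hand-written seed solves manually is a topological sort over the foreign-key graph. A minimal sketch with hypothetical table and FK shapes — not Seedfast's actual implementation:

```typescript
// Sketch of FK-aware insert ordering: a topological sort over table
// dependencies. Hypothetical shapes — not Seedfast's actual implementation.
type FkEdge = { from: string; to: string }; // `from` has a column referencing `to`

export function insertOrder(tables: string[], fks: FkEdge[]): string[] {
  const deps = new Map(tables.map((t): [string, Set<string>] => [t, new Set()]));
  for (const { from, to } of fks) {
    if (from !== to) deps.get(from)!.add(to); // self-references need no ordering
  }
  const ordered: string[] = [];
  const seen = new Set<string>();
  while (ordered.length < tables.length) {
    // A table is ready once every table it references has been inserted
    const ready = tables.filter(
      (t) => !seen.has(t) && [...deps.get(t)!].every((d) => seen.has(d))
    );
    if (ready.length === 0) break; // circular group left over — handle separately
    for (const t of ready) {
      seen.add(t);
      ordered.push(t);
    }
  }
  return ordered;
}
```

Mutually referencing tables drop out of the loop as a leftover group, which in practice means inserting them with nullable or deferred FKs and patching afterward — the special-casing a schema-reading tool does for you.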
The scope changes per environment, the command doesn't:
# Local dev loop
seedfast seed --scope "2 organizations, 4 users each, 5 projects between them"
# Preview branch for a PR that touches the billing flow
seedfast seed --scope "1 organization mid-trial with overdue invoice and 3 retried payments"
# Sales demo
seedfast seed --scope "500 organizations across 3 plan tiers, 6 months of project history"
Run prisma migrate deploy first to make sure the database is on the latest schema, then run Seedfast against the same direct URL. Prisma owns the structure; Seedfast fills it.
For production reference data — feature flags, country codes, admin roles — keep the versioned SQL or the Prisma seed. That data belongs to the application and should ship through migrations. Seedfast targets the larger, evolving development, CI, staging, and demo datasets that go stale every time the schema moves — not the dozen rows that go to prod.
For teams that can't copy production data into dev — fintech, healthcare, anyone under HIPAA, GDPR, or SOC 2 review — Seedfast removes the blocker that hand-written seeds and anonymization pipelines both leave in place:
- Seedfast generates from the schema, not a snapshot. No production access, no PII pipeline, no anonymization step. Schema metadata (table and column shapes) is sent over the wire to Seedfast's generation service so the model knows what to build; row values are not. The result is realistic test data without ever pulling production rows out of their environment. For the wider workflow, see staging without production data and the HIPAA test data playbook.
- Seedfast fills a database, not a column. Faker hands back random strings. Seedfast generates connected organizations, accounts, transactions, and audit rows with valid foreign keys, distributions that look like a real product, and values that pass a domain check.
- Seedfast scales with one command. The same seedfast seed runs ten rows for a unit test and half a million for a load test — only the --scope changes.
The free tier covers small dev datasets with no credit card, so a Friday spike costs nothing — connect and run your first seed in about two minutes, or check pricing before sending it to your CTO.
You're seeding through the pooled URL (pooled.db.prisma.io). The transaction-mode pooler discards prepared statements between transactions, and the driver tries to reuse one that's gone. Switch to the direct URL for the seed.
A server-side Postgres plan cache error: a prepared statement's result type changed underneath it, usually because a migration ran between two seed invocations on the same connection. Run migrations to completion before the seed, and use the direct URL — the pooler can mask this error in confusing ways.
Your connection string is missing sslmode=require. Prisma Postgres rejects non-SSL connections. Add the parameter, or set PGSSLMODE=require in the environment.
A prisma+postgres://... URL slipped into DIRECT_URL. The Prisma Console issues TCP strings (db.prisma.io / pooled.db.prisma.io) for Prisma Postgres — use those for psql, the pg driver, and prisma db seed without an adapter. The Prisma-protocol URL is not part of the standard seeding path.
This is the Prisma ORM v7 behavior change. migrate dev no longer auto-seeds. Either invoke prisma db seed explicitly, or chain it in a script: prisma migrate dev && prisma db seed.
The seed ran before the migration. The order is always migrate then seed, never the reverse. In CI, make prisma migrate deploy (or prisma migrate reset --force on ephemeral branches) a hard prerequisite of the seed step.
Two recurring causes: CI uses the pooled URL because that's what the app uses (switch to the direct URL for the seed step), or the CI environment hasn't run migrations against the freshly-created database (run prisma migrate deploy before the seed).
| Aspect | Raw SQL (psql -f) | prisma db seed | Seedfast |
|---|---|---|---|
| Setup time | None | Already there if you use Prisma | npm install -g seedfast |
| External dependency | None — psql is everywhere | None — already in your stack | Separate CLI plus a network call to Seedfast's generation service |
| File you maintain | seed.sql | seed.ts | None — reads the live schema |
| FK order | Manual | Manual | Automatic |
| Survives migrations | No — manual updates on every change | No — manual updates on every change | Yes — regenerates from the live schema |
| Realistic volumes | Painful past ~50 rows | Works with Faker, still hand-written | Natural-language scope describes the dataset |
| Output stability | Deterministic — committed values | Deterministic if you use fixed seeds | Regenerated each run; pin a seed value when you need byte-stable fixtures |
| Reviewable in PRs | Yes — diff the SQL | Yes — diff the seed script | Indirectly — the scope is reviewable, individual rows are not |
| Good for static config (flags, codes) | Excellent — versioned, reviewed | Excellent — versioned, reviewed | Not the target |
| Good for dev / CI / staging datasets | Manual re-sync on every migration | Manual re-sync on every migration | Regenerates from the live schema |
| What leaves your environment | Nothing — runs locally | Nothing — runs locally | Schema metadata only (table and column shapes); no row data is transmitted |
Pick by job, not by religion. Reference data — the dozen rows the app reads at boot — belongs in a committed SQL file or a small prisma db seed script that goes through code review with the schema change. Seedfast targets the larger, evolving dev / CI / staging datasets that go stale on every migration, where regenerating from the schema is faster than fixing the file. Free tier, no credit card — try it on your Prisma Postgres schema in about two minutes.
Copy the direct connection string from the Prisma Console (the one with db.prisma.io, not pooled.db.prisma.io) and run psql "$DIRECT_URL" -f seed.sql. Make sure sslmode=require is on the URL. For Prisma projects, configure seed: "tsx prisma/seed.ts" in prisma.config.ts and run npx prisma db seed. For schemas with many tables, seedfast seed --scope "..." generates connected data without writing a seed file.
Use the direct URL (db.prisma.io:5432). The pooled URL routes through a transaction-mode connection pooler that breaks prepared statements and can interrupt long seed transactions. Reserve the pooled URL for the application at runtime, where short, bursty queries benefit from the pool.
Yes — install @prisma/adapter-ppg, pass a PrismaPostgresAdapter instance to the PrismaClient constructor, and the seed runs over Prisma's HTTP/WebSocket transport. The adapter takes the same direct TCP connection string from the Prisma Console; you don't need a separate URL. For seeds running on Node (CI, locally), the plain TCP path with pg is simpler and one fewer dependency.
Set DIRECT_URL as a repository secret pointing at the direct connection string. In the workflow, run npx prisma migrate deploy against DIRECT_URL, then run your seed step (either npx prisma db seed or seedfast seed --scope "..."). For application steps in the same job, use the pooled URL via DATABASE_URL. The general pattern is covered in CI/CD database seeding.
Connect Seedfast to the database (seedfast connect, paste the direct URL) and run seedfast seed --scope "<plain English description>". Seedfast reads the live schema and generates a valid, connected dataset every time, so when migrations add columns or tables, the next seed run reflects the new shape automatically — there's no file that goes stale. See database seeder tools compared for how this fits next to the framework-built-in seeders.
Yes — that's the typical setup for teams in regulated industries. Stand up a separate Prisma Postgres database for staging, run migrations against it, then seed it with realistic generated data. No production rows leave the production environment, and the dev/staging path doesn't depend on an anonymization pipeline. Schema metadata is the only thing that crosses the wire to Seedfast's generation service, so review the data path against your security policy the same way you would any new vendor. Staging without production data walks through the workflow.
The short version: use db.prisma.io:5432 for seeds, never pooled.db.prisma.io:5432. Configure DIRECT_URL in prisma.config.ts and reach for @prisma/adapter-ppg only when the seed has to run on an edge runtime. Either pick a maintenance schedule for seed.ts or hand the job to a tool that reads the schema on every run — what doesn't work is pretending a hand-written seed will keep up with twenty migrations a quarter.
If a Friday afternoon spike on the seed file is what brought you here, the direct URL fixes today's break. The deeper fix is to stop hand-rolling the dataset every time the schema moves. Seedfast does that for Prisma Postgres in two minutes — connect to your database and run a seed before you close the laptop.
- How to seed a database: PostgreSQL practical guide — the framework-agnostic version of this article, covering raw SQL, Prisma, Drizzle, TypeORM, and node-postgres
- Database seeding: methods and best practices — the conceptual companion covering reference vs test data, idempotency, and when seed files stop scaling
- Database seeder tools compared — quick reference for Prisma, Drizzle, TypeORM, and standalone tools side by side
- Seed file maintenance — why static seed files fall out of sync with the schema, and what to do about it
- How to seed a Neon database — the Neon sibling to this guide, with the same pooler-vs-direct dance and a deep dive into branching
- How to seed a Supabase database — the Supabase sibling, covering seed.sql, supabase db reset, and Supabase preview branches
- Staging database without production data — the compliance-driven workflow for fintech, healthcare, and anyone who can't pipeline PII into staging
- Get started with Seedfast — connect to your Prisma Postgres database and run your first schema-aware seed