
Snaplet Seed Shut Down. Here's Where Your Seed Data Goes Next

By the Seedfast team

Snaplet shut down on August 31, 2024. The code is open source, but the fork has barely moved. If @snaplet/seed still lives in your package.json, you're running a dependency that hasn't had a real release in nearly two years. Here's what that actually means and a Snaplet Seed alternative you can migrate to in an afternoon.

Key Takeaways

  • Snaplet Seed was formally shut down on August 31, 2024; the last meaningful release (@snaplet/seed v0.98.0) landed on July 30, 2024 — before the sunset announcement — and little has moved since
  • The open-source continuation lives at github.com/supabase-community/seed; it is usable, but release activity has slowed and there is no published roadmap
  • Basecut (which also ranks for "Snaplet alternative") is positioned as a replacement for @snaplet/snapshot — the anonymization product. If you were running @snaplet/seed instead, the data-generation product, that is a different migration path
  • A real Snaplet Seed alternative has to replace five things: the schema reader, the foreign-key resolver, realistic value generation, the TypeScript seed.ts workflow, and the CI/CD hooks — swapping one in isolation leaves the others broken
  • Seedfast replaces the whole stack without a codegen step or a seed.config.ts file: connect to Postgres, describe the scope in plain English, and the seeding runs against the live schema. Free tier available.

What actually happened to Snaplet

Snaplet was a YC-backed company that had been building developer tools for realistic test data since 2021. It shipped two products — @snaplet/snapshot (snapshot and anonymize production) and @snaplet/seed (generate data from schema) — both of which found genuine traction in the PostgreSQL and Supabase ecosystem.

In summer 2024, the founders announced the shutdown with an August 31 cutoff for the hosted service. The tools were open-sourced, the team joined Supabase, and maintenance of @snaplet/seed was handed to the Supabase community under github.com/supabase-community/seed.

That part is fine. Snaplet could have simply archived everything; instead they open-sourced it and found a maintainer. What's less fine is what happened next. The last meaningful release is v0.98.0, shipped July 30, 2024 — before the shutdown announcement. Since the community handover, activity has been issues and minor fixes, not feature work. The repo is alive; the roadmap isn't.

If your team adopted @snaplet/seed between 2022 and 2024, the tool you evaluated — one with an active company, paid support, and a feature roadmap — is not the tool you're running today. That gap is the whole reason this article exists.

Your options if you're still on @snaplet/seed

Teams currently running @snaplet/seed (or its community fork) typically have four paths. All of them are real; three of them are slow.

Stay on the community fork. It works. For schemas that aren't changing much and don't need new features, there's no urgency. But you're depending on volunteer bandwidth to keep pace with PostgreSQL 18, Supabase schema changes, and whatever breaks in Prisma 7. If you hit a bug that needs a real fix, "wait for a maintainer" is the answer.

Rewrite to plain Faker + manual seeders. Strip the dependency, drop back to @faker-js/faker plus hand-written INSERT statements or ORM factories. This is the "return to 2019" path. You own every line of it, which is good, but you also own every foreign key you have to wire by hand — which is exactly the pain @snaplet/seed was built to eliminate. For anything beyond 10 tables, this trade is rarely worth it.

Move to a framework's native seeder. If you use Prisma, Laravel, or EF Core, each ships a built-in seed mechanism. But the core constraint is identical to the Faker rewrite: data is defined manually, relationships are wired by hand, and every migration breaks something. ORM seeders don't scale past a weekly-migration cadence either.

Migrate to an active schema-aware alternative. Keep the "tool reads your schema, produces valid related data" property, ditch the dead dependency. This is the path that preserves the reason you adopted Snaplet in the first place. Seedfast is the closest functional match for @snaplet/seed users: CLI-first, schema-aware, actively developed, free tier. Tonic Fabricate is another option, priced for enterprise procurement. Greenmask targets anonymization more than generation, so it fits the @snaplet/snapshot use case, not the seed-generation one.

The rest of this guide walks through the fourth path using Seedfast, since it's the one most similar in shape to what @snaplet/seed was doing.

What a Snaplet Seed alternative has to replace

Before the migration steps, it's worth being precise about what @snaplet/seed actually did so you know what to look for in any replacement.

Schema introspection. Snaplet Seed ran a codegen step (npx @snaplet/seed generate) that read your Postgres schema and produced a typed client. That client knew your tables, columns, foreign keys, and constraints at compile time.

A typed scope API. You wrote TypeScript against that generated client — await seed.users((x) => x(10)) — and Snaplet Seed would create ten users with whatever dependencies they needed, deterministically via Copycat.

Foreign-key resolution. The generated client understood dependencies. Ask for orders, and it created the users, products, and line items needed to make those orders valid.

AI plugin. Later versions added an ai plugin that could pull realistic values from OpenAI or Groq for columns that Copycat couldn't infer well (addresses, product names, free-text fields).

A CI story. Running tsx seed.ts in a pipeline after migrations was the default.

A real replacement has to cover all five. Swapping in Faker handles the value generation but drops FK resolution. Swapping in Tonic handles schema awareness but moves you into enterprise procurement. The criterion is whether the replacement does the full job in a single tool.

Seedfast: the same job in a different shape

Seedfast is an AI-powered CLI for PostgreSQL that reads your live schema on every run and generates connected test data. The functional overlap with @snaplet/seed is substantial; the interface is different.

```shell
# Install (Homebrew or npm)
brew install argon-it/tap/seedfast
# or: npm install -g seedfast

# Authenticate and point it at your database
seedfast login
seedfast connect

# Generate data
seedfast seed --scope "10 users with 5 orders each across 3 product categories"
```

There's no codegen step. No seed.config.ts. No typed client to regenerate after every migration. The scope is a plain-English sentence instead of a TypeScript fluent API. When the schema changes, the next seedfast seed picks it up — no artifact to keep in sync.

The mechanism is different from Snaplet too. Where @snaplet/seed relied on Copycat for deterministic value generation plus an optional AI plugin, Seedfast uses AI as the primary planner: it reads the schema, infers the semantic intent from your --scope, and generates data that fits both the constraints and the domain. That makes the output more contextually realistic for domain-heavy schemas (healthcare, e-commerce, fintech). The trade-off is determinism: where Copycat reproduced the same dataset from the same inputs by design, Seedfast generates fresh data on each run. Teams that need repeatable datasets for e2e baselines usually pin a specific run's output (via pg_dump or the dashboard) rather than regenerating.

@snaplet/seed vs Seedfast: side-by-side

| Capability | @snaplet/seed (community fork) | Seedfast |
| --- | --- | --- |
| Installation | npm install @snaplet/seed + codegen | npm install -g seedfast or Homebrew binary |
| Config file | seed.config.ts required | None |
| Schema awareness | Generated client (regenerated per schema change) | Live read on every run |
| Scope API | Typed TypeScript fluent API | Plain-English --scope flag |
| FK resolution | Yes | Yes |
| Realistic values | Copycat (deterministic) + optional AI plugin | AI-native, domain-aware |
| Determinism | Seed-based, deterministic by default | Fresh generation per run (pin via pg_dump) |
| ORM coupling | TypeScript-first; best fit for Node/Prisma | ORM-agnostic (reads Postgres directly) |
| CI/CD integration | tsx seed.ts + env vars | npx seedfast seed --scope "…" --output json + SEEDFAST_API_KEY |
| Maintenance status | Community fork, limited release activity since July 2024 | Actively maintained (see public getting started guide and release cadence) |
| MCP integration | No | Yes (AI assistants can seed via MCP server) |
| PostgreSQL focus | Yes (with experimental SQLite/MySQL paths that shipped before shutdown) | Yes (MySQL planned) |
| Runtime dependency | Local-only (no network after codegen) | Requires Seedfast account + internet connection (planning runs server-side) |

The rows to pay attention to depend on what you originally liked about Snaplet. If it was the typed TypeScript API, Seedfast's plain-English scope is a genuine change in shape, and you give something up. If it was "the tool reads my schema and handles FKs", Seedfast is a direct match with a different surface.

Evaluating the switch? Seedfast's free tier is enough for a full migration spike — install, connect your dev database, run one seed. Get started in under five minutes.

Migration guide from @snaplet/seed to Seedfast

Below is a concrete migration for a typical @snaplet/seed setup. Most teams finish this in under an hour.

Step 1. Catalog what the current seed file does

Before replacing anything, read your seed.ts and write down the scenarios in plain English. Something like:

  • 20 users across 3 teams with admin/editor/viewer role distribution
  • 50 products in 5 categories with realistic prices
  • 100 orders, average 2 line items each, over the last 6 months
  • One test admin user with a known email for e2e login

This list becomes your Seedfast scopes. The easier this list is to write, the closer your current seed.ts is to something a --scope string can express directly.
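
Since these scopes will be run repeatedly, many teams keep them in a small versioned script. A sketch, assuming the checklist above; the script name and exact scope phrasings are illustrative, and the known-email admin fixture is handled separately in Step 5:

```shell
#!/usr/bin/env sh
# seed-dev.sh (illustrative name): the Step 1 checklist, kept as versioned scope strings.
set -e  # stop at the first failed seed call

seedfast seed --scope "20 users across 3 teams with an admin/editor/viewer role split"
seedfast seed --scope "50 products in 5 categories with realistic prices"
seedfast seed --scope "100 orders, about 2 line items each, over the last 6 months"
```

Checking this script into the repo gives the scope strings the same review history the old seed.ts had.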

Step 2. Handle the table-reset step separately

Snaplet's seed.$resetDatabase() truncated tables as part of the seed call. Seedfast does not do this automatically — it writes into whatever state the database is in. If your old workflow relied on $resetDatabase(), decide where that responsibility now lives:

```shell
# Simple path: truncate before seeding
psql "$DATABASE_URL" -c "TRUNCATE users, teams, products, orders RESTART IDENTITY CASCADE;"

# Or recreate the schema (CI / ephemeral DBs):
npx prisma migrate reset --force --skip-seed  # Prisma example
```

This is typically one line in a CI workflow or a make target. It is not a limitation so much as a separation of concerns — the seeder generates, the migration tool handles schema state.
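
As a sketch of that separation of concerns, the reset and the seed can live side by side in one short script; the script name, table list, and scope are illustrative, and DATABASE_URL is assumed to be set:

```shell
#!/usr/bin/env sh
# reset-and-seed.sh (illustrative): migration tooling owns schema state,
# the seeder owns data generation.
set -e
psql "$DATABASE_URL" -c "TRUNCATE users, teams, products, orders RESTART IDENTITY CASCADE;"
seedfast seed --scope "3 teams, 20 users, 50 products, 100 orders"
```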

Step 3. Remove the Snaplet dependency

```shell
npm uninstall @snaplet/seed @snaplet/copycat
rm -f seed.ts seed.config.ts
rm -rf .snaplet
```

If your CI scripts reference @snaplet/seed generate or tsx seed.ts, note them — Step 6 rewires these.

Step 4. Install and connect Seedfast

```shell
# Homebrew (macOS/Linux)
brew install argon-it/tap/seedfast

# Or npm (all platforms)
npm install -g seedfast

seedfast login      # authenticate (one-time per machine)
seedfast connect    # paste your Postgres connection string
```

Seedfast reads the current schema from whatever database you connect to. There is no per-project config file; credentials are stored in the OS keychain. See the getting started guide for the full first-run walkthrough.

Step 5. Replace the seed calls

Where seed.ts had:

```typescript
// Old @snaplet/seed
import { createSeedClient } from "@snaplet/seed";

const seed = await createSeedClient();
await seed.$resetDatabase();
await seed.teams((x) => x(3));
await seed.users((x) => x(20, (ctx) => ({
  role: ctx.index < 2 ? "admin" : "editor",
})));
await seed.products((x) => x(50));
await seed.orders((x) => x(100));
```

The Seedfast equivalent is a single command:

```shell
seedfast seed --scope "3 teams. 20 users across those teams, 2 admins and the rest editors. 50 products. 100 orders, each with 2-5 line items, spread over the last 6 months."
```

When --scope is provided, Seedfast auto-approves the generated plan without prompting — which is what you want for scripts and CI.

Where seed.ts used hierarchical contexts (ctx.connect, per-record overrides like "this specific user owns this specific order"), the plain-English scope is not an exact substitute. The pattern that works: let Seedfast generate the bulk data, then insert prescriptive fixtures with a short SQL follow-up.

Concrete example. Snaplet wiring a known admin user to a specific team:

```typescript
// Old @snaplet/seed — prescriptive wiring to a specific team
await seed.teams((x) => x(1, { name: "QA Team" }));
await seed.users((x) => x(1, (ctx) => ({
  email: "admin@example.com",
  team: ctx.teams[0],
  role: "admin",
})));
```

Seedfast equivalent — generate the background, then pin the fixture:

```shell
seedfast seed --scope "3 teams with 5 users each"
```

```sql
-- Pin the e2e login fixture afterward
INSERT INTO teams (id, name) VALUES ('qa-team-1', 'QA Team');
INSERT INTO users (id, email, team_id, role)
VALUES ('admin-1', 'admin@example.com', 'qa-team-1', 'admin');
```

Two files instead of one, but both are short and the SQL file is stable — it only changes when your e2e login expectations change, not when the schema shifts. Most teams find that 90% of what seed.ts did was generic "make plausible data" (perfect for --scope) and 10% was "this exact record for this exact test" (the SQL file). If that ratio is different for your codebase — say, heavy ctx.connect use across many tables — map it with the checklist from Step 1 before committing to the migration.

If the plain-English scope describes more tables or rows than your current Seedfast plan allows, the CLI exits with a non-zero status in non-interactive mode. Split the scope into multiple seedfast seed calls, or narrow the scope to the tables you need.

Step 6. Rewire CI/CD

Wherever your pipeline ran tsx seed.ts, replace it with seedfast. The shape is identical — a single step after migrations, before tests:

```yaml
# Before
- name: Seed database
  run: |
    npx @snaplet/seed generate
    tsx seed.ts
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}

# After
- name: Seed database
  run: npx seedfast seed --scope "e2e baseline: 3 teams, 20 users, 100 orders" --output json
  env:
    SEEDFAST_API_KEY: ${{ secrets.SEEDFAST_API_KEY }}
    SEEDFAST_DSN: ${{ secrets.SEEDFAST_DSN }}
```

In CI, the CLI uses SEEDFAST_API_KEY for non-interactive auth and reads the connection string from SEEDFAST_DSN (or DATABASE_URL as a fallback). --output json produces machine-readable output for pipeline validation. The CI/CD database seeding docs cover GitLab CI, CircleCI, and ephemeral preview environments in more detail.

Step 7. Verify

Run your integration tests against the new dataset. The shape of data Seedfast produces is similar in spirit to what @snaplet/seed produced (valid, connected, schema-respecting) but the exact values will differ. Tests that hard-coded Copycat-generated values — an email like alice.smith@example.com that happened to be deterministic — need to be updated to query what the seed produced rather than assume specific literals. This is a one-time fix and usually reveals brittle tests worth cleaning up anyway.
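
A sketch of that one-time fix, assuming the users table from the earlier examples: look up whatever the seed produced instead of asserting a literal.

```shell
# Fetch a seeded admin email at test time rather than hard-coding it.
# -t suppresses headers, -A disables alignment, so the output is a bare value.
ADMIN_EMAIL=$(psql "$DATABASE_URL" -tA -c "SELECT email FROM users WHERE role = 'admin' LIMIT 1;")
echo "running e2e login as $ADMIN_EMAIL"
```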

What you lose and what you gain

Being honest about the trade-offs makes the migration predictable.

You lose the typed TypeScript fluent API. If your team genuinely enjoyed writing await seed.users((x) => x(10, ctx => ({ team_id: ctx.teams[0].id }))) and getting IDE autocomplete for the entire schema, that muscle memory doesn't transfer — Seedfast's interface is a CLI and a scope string, not a code API. You trade compile-time schema access for "no code artifact to maintain."

You lose default determinism. Copycat produced the same fake name from the same input, every time. Seedfast generates fresh data on each run; if you need a repeatable dataset for an e2e baseline, pin the generated output with a pg_dump and restore from that in CI. It's an extra step, not a blocker — but it's an extra step.
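
The pinning step is ordinary Postgres tooling rather than a Seedfast feature. A sketch, assuming pg_dump and pg_restore are available and the scope wording is illustrative:

```shell
# One-time: seed, then capture the result as a custom-format dump
seedfast seed --scope "e2e baseline: 3 teams, 20 users, 100 orders"
pg_dump --format=custom --file=e2e-baseline.dump "$DATABASE_URL"

# Every CI run: restore the pinned dump instead of regenerating
pg_restore --clean --if-exists --no-owner --dbname="$DATABASE_URL" e2e-baseline.dump
```

The custom format keeps the dump compressed and lets pg_restore drop and recreate objects cleanly between runs.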

You gain low-maintenance schema awareness: no snaplet generate step that drifts from the real schema, no seed.config.ts to keep in sync, no "regenerate the client" step in onboarding docs. Seedfast reads the live schema on each run, so it tracks migrations automatically. Teams still maintain the scope strings they care about, and the short SQL file of prescriptive fixtures mentioned in Step 5 — but no typed client, no codegen artifact. See why static seed files break for the broader case.

You gain domain-aware data generation. Copycat produced lexically plausible strings. AI-driven generation produces strings that match the semantic scope — an electronics store gets electronics product names, a healthcare schema gets medication-adjacent fields. For demos and staging environments, the difference shows up immediately.

You gain an actively maintained dependency. When Postgres 18 ships a new feature, when Supabase changes a branching default, when Prisma 7 reshuffles the seed command — those fixes land. That was the original contract with commercial Snaplet, and it's the contract that disappeared in August 2024.

You gain MCP integration. If your team uses Claude Code, Claude Desktop, or any other MCP-aware AI assistant, Seedfast ships an MCP server that lets the assistant seed databases inside the conversation. The MCP setup guide covers the config. Snaplet never shipped this and probably never would have — it predates the protocol.

Frequently asked questions

Is Snaplet Seed actually dead, or just "community maintained"?

It's the second, but in a way that behaves like the first for most teams. The code runs; the GitHub issues get some attention. But the last substantive release is from July 2024 and there is no visible roadmap. Depending on that for a critical CI step means accepting that unplanned fixes will take as long as a volunteer has free time. Teams with SLAs or frequent schema changes usually find that ceiling low.

Can I keep using @snaplet/seed and ignore this?

Yes, as long as your schema is stable, your Postgres version isn't changing, and you don't hit a bug that needs a real fix. The risk is that all three of those are moving targets. Most teams that open @snaplet/seed issues in 2026 find a thread waiting for a maintainer. That's a fine situation until it isn't.

Does Seedfast work with Supabase the way @snaplet/seed did?

Yes. Seedfast reads PostgreSQL directly, so it works against any Supabase project — including branching. You can seed a parent branch once and have preview branches inherit the data. Unlike @snaplet/seed, there's no Supabase-specific adapter needed; it's just a Postgres connection string. The same shape works for any hosted Postgres (Neon, RDS, self-hosted).

Is Seedfast deterministic? My CI needs the same data every run.

By default, no — scope strings produce fresh data on each run. That is a genuine difference from Copycat-backed Snaplet. The practical workaround most teams adopt is to run the seed once, pg_dump the result, and restore from that dump in CI. This gives you an identical dataset every pipeline run and also makes CI faster (restoring a dump beats regenerating). If determinism is a hard requirement for your workflow, factor that into your evaluation.

Isn't Basecut already the "Snaplet alternative"? How is Seedfast different?

Basecut is a replacement for @snaplet/snapshot — the product that took anonymized copies of your production database. Seedfast is a replacement for @snaplet/seed — the product that generated fresh data from the schema. If you were snapshotting prod and masking it, Basecut is the direct path. If you were writing seed.ts to populate empty dev and CI databases, Seedfast is. They solve adjacent but distinct problems, which is why both can rank for the same broad query.

What about anonymization? @snaplet/snapshot did that and Seedfast doesn't.

Correct — Seedfast generates synthetic data; it doesn't anonymize production data. If your workflow was "snapshot prod, mask it, ship it to dev," Seedfast is the wrong shape. Look at Greenmask or Tonic Structural for that. Most teams using @snaplet/seed were on the generation path, not the snapshot path — if you fall in that category, Seedfast is the direct swap.

Can I run both in parallel during migration?

Yes, and it's often the right approach for larger codebases. Keep @snaplet/seed running in CI while you stand up Seedfast on a branch. Compare the datasets they produce against a known test suite. Cut over once the tests pass against the Seedfast-generated data. The cost is one extra CI step for a week or two.
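
A sketch of that parallel phase, assuming two throwaway databases and an npm test suite; the variable names and scope are illustrative:

```shell
# Seed each database with its own tool, then run the same suite against both.
DATABASE_URL="$SNAPLET_DB" npx tsx seed.ts
DATABASE_URL="$SNAPLET_DB" npm test

SEEDFAST_DSN="$SEEDFAST_DB" npx seedfast seed --scope "3 teams, 20 users, 100 orders"
DATABASE_URL="$SEEDFAST_DB" npm test
```

Once the second pair of commands passes consistently, delete the first pair and the Snaplet dependency with it.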

Migrate in one command

If your seed.ts is under 200 lines, the migration is usually under an hour: uninstall the package, install Seedfast, rewrite the scope as a sentence, update one CI step. If it's longer, break it up: use a scope string for the 90% generic data and a small SQL insert or ORM call for the 10% that needs specific records. Either way, there's no code artifact that has to stay in lockstep with the schema.

Free tier. First seeded database in under five minutes. Get started with Seedfast or read the installation walkthrough. If you're still comparing options, the data seeding tools comparison covers the broader field (Faker, ORM seeders, enterprise anonymization, and schema-aware generators) so you can pick what fits your situation rather than what matches a single blog post.