Why SQLite Might Be All You Need
Before you spin up Postgres, consider whether SQLite could handle your use case. For many projects, it can — and the simplicity is worth it.
Every new project seems to start the same way: spin up a Postgres container, configure connection pooling, set up migrations, add a health check endpoint. It’s become muscle memory. But for a surprising number of applications, all that infrastructure is solving a problem you don’t have.
SQLite is not a toy
Let’s clear this up first. SQLite handles:
- Terabytes of data (the maximum database size is 281 TB)
- Millions of rows with proper indexing
- Concurrent reads with WAL mode enabled
- Full ACID transactions, with durability that holds across crashes and power failures
It powers Firefox, Android, iOS, Airbus flight software, and roughly a trillion deployed instances worldwide. It’s the most widely deployed database engine in existence.
The write concurrency question
“But what about concurrent writes?” This is the real limitation, and it’s worth understanding precisely. SQLite allows one writer at a time. In WAL mode, readers don’t block writers and writers don’t block readers, but two simultaneous writes will serialize.
For most web applications — especially content sites, internal tools, dashboards, and APIs with moderate traffic — this is fine. You’re not building Twitter. If your app handles fewer than a few hundred write requests per second, SQLite won’t be your bottleneck.
The deployment advantage
Here’s where SQLite changes the game:
Your entire database is one file.
No database server to configure, monitor, or keep alive. No connection strings, no connection pool tuning, no “the database container ran out of memory at 3 AM” incidents.
Deploying means copying your application binary and your .db file. Backing up means copying that file (or using SQLite’s online backup API). Moving to a bigger server means… copying the file.
Practical setup for web apps
Modern SQLite for web applications looks like this:
PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL;
PRAGMA foreign_keys = ON;
PRAGMA busy_timeout = 5000;
WAL mode enables concurrent reads during writes. synchronous = NORMAL is safe in WAL mode: a power loss can at worst roll back the most recently committed transactions, never corrupt the database. foreign_keys must be set per connection, because SQLite leaves enforcement off by default for backward compatibility. busy_timeout tells SQLite to wait up to 5 seconds for a lock instead of failing immediately. These four lines solve 90% of “SQLite isn’t suitable for web apps” complaints.
In your application code, use a single long-lived connection for writes and a pool of read-only connections for queries:
import Database from 'better-sqlite3';

// One long-lived connection handles all writes
const writer = new Database('app.db');
writer.pragma('journal_mode = WAL');
writer.pragma('synchronous = NORMAL');
writer.pragma('foreign_keys = ON');
writer.pragma('busy_timeout = 5000');

// Read-only connections can be opened freely; WAL mode is a persistent
// property of the database file, so readers pick it up automatically
const reader = new Database('app.db', { readonly: true });
When to reach for Postgres
SQLite isn’t always the answer. You should use a client-server database when you need:
- Multi-server writes. If your application runs on multiple servers that all need to write, SQLite on a single disk won’t work. (Though tools like LiteFS and Turso are changing this.)
- Advanced features. Full-text search with language-specific stemming, PostGIS for geospatial queries, LISTEN/NOTIFY for real-time events — Postgres has a richer feature set.
- Separate scaling. If your database needs to scale independently of your application server, a managed Postgres instance makes more sense.
Start simple
The best architecture is the simplest one that solves your problem. For many projects — prototypes, side projects, content sites, internal tools, mobile apps, and even moderate-traffic web applications — SQLite is that architecture.
You can always migrate to Postgres later if you outgrow it. But you might be surprised how far “one file” takes you.