I was working on a production system last week when a simple database change caused a 15-minute outage. It was a humbling reminder: in modern applications, the database is the backbone, and changing it while it’s under load is risky business. This experience made me want to share a better way. If you’re running a Node.js service that can’t afford to go offline, this guide is for you. Let’s talk about changing your database without anyone noticing.
The goal is simple: modify your PostgreSQL schema while your application serves users, without errors or data loss. This isn’t about fancy tools; it’s about a methodical, safe process. We’ll use Kysely, a type-safe SQL query builder, to give us confidence in our changes. Why Kysely? It turns your SQL into TypeScript, catching mistakes before they reach production.
First, let’s set the stage. You’ll need a basic project structure. We’ll organize migrations in a way that makes them easy to run and track.
// src/database/db.ts - The core connection
import { Kysely, PostgresDialect } from 'kysely';
import { Pool } from 'pg';
import { Database } from './types';
const pool = new Pool({
host: process.env.DB_HOST,
database: process.env.DB_NAME,
// ... other config
});
export const db = new Kysely<Database>({
dialect: new PostgresDialect({ pool }),
});
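The Database type imported above is where Kysely's type safety comes from: it is a plain TypeScript description of your tables, so a query against a column that doesn't exist fails to compile. A minimal sketch matching the examples in this guide (the column names are illustrative) might look like this:
// src/database/types.ts - table definitions Kysely checks queries against
import { Generated } from 'kysely';

interface UsersTable {
  id: Generated<number>;
  name: string;
  email: string | null; // nullable until the contract step later on
}

interface PostsTable {
  id: Generated<number>;
  old_title: string;
  new_title: string | null;
}

export interface Database {
  users: UsersTable;
  posts: PostsTable;
}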
The key to safe migrations is a pattern often called “expand and contract.” Think of it like rebuilding a bridge while traffic still flows. You don’t demolish the old bridge first. You build a new lane alongside it, direct traffic to use both, and only then remove the old lane.
This means every change happens in multiple, backward-compatible steps. The application must work with both the old and new schema at every point. Does that sound complicated? It’s simpler when you see it in action.
Let’s start with a common task: adding a new column to a table. The wrong way is to run ALTER TABLE users ADD COLUMN email TEXT; and, in the same release, deploy new code that depends on it. What happens if the migration runs slowly and the new code starts first? Errors.
The right way has three phases. Phase one is the “expand” step. We add the column as nullable, with no default value. This is safe for the old application code; it just ignores the new column.
// Migration: 001_add_user_email.ts
import { Kysely } from 'kysely';
export async function up(db: Kysely<any>): Promise<void> {
await db.schema
.alterTable('users')
.addColumn('email', 'text')
.execute();
}
Notice the plain text type: no NOT NULL, no DEFAULT. Adding a nullable column without a default is a fast, metadata-only change in PostgreSQL, so the brief lock it takes is harmless. Your old v1.0 application keeps working because it simply ignores the new column. Phase two is deploying your new v1.1 application code. This code should tolerate the column being present or not: it writes to the new email column when it can, but doesn’t fail if the column isn’t there yet.
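For illustration, a v1.1 write path might look something like this; the createUser helper and its inputs are hypothetical, but the pattern is simply "include the new column only when you have a value for it":
// src/users/create-user.ts - v1.1 writes email when available (hypothetical helper)
import { db } from '../database/db';

export async function createUser(name: string, email?: string) {
  return db
    .insertInto('users')
    .values({
      name,
      // The column is still nullable at this stage, so omitting email is safe
      ...(email ? { email } : {}),
    })
    .returningAll()
    .executeTakeFirstOrThrow();
}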
After v1.1 is fully deployed and running, you move to phase three: the “contract” step. Now you can safely add constraints because all running code knows about the column.
// Migration: 002_make_email_required.ts
import { Kysely } from 'kysely';
export async function up(db: Kysely<any>): Promise<void> {
// First, backfill any null values if necessary
await db.updateTable('users')
.set({ email: 'unknown@example.com' }) // placeholder for rows without an email
.where('email', 'is', null)
.execute();
// Then add the NOT NULL constraint
await db.schema
.alterTable('users')
.alterColumn('email', (col) => col.setNotNull())
.execute();
}
What about renaming a column? This is a more dangerous operation. ALTER TABLE ... RENAME COLUMN itself is a quick metadata change, but it takes an exclusive lock that can queue behind long-running queries, and worse, the moment it commits, every running instance that still queries the old name starts throwing errors. The safer approach is to add a new column, copy data over gradually, and remove the old one only after nothing reads it.
Here’s how you start. You add the new column, then create a database trigger to keep both columns in sync. This dual-write strategy ensures data consistency.
// Migration: Add new column and sync trigger
import { Kysely, sql } from 'kysely';
export async function up(db: Kysely<any>): Promise<void> {
// Add the new column
await db.schema
.alterTable('posts')
.addColumn('new_title', 'text')
.execute();
// Create a trigger to copy data on insert/update
await db.executeQuery(
sql`CREATE OR REPLACE FUNCTION sync_post_title()
RETURNS TRIGGER AS $$
BEGIN
NEW.new_title = NEW.old_title;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;`.compile(db)
);
await db.executeQuery(
sql`CREATE TRIGGER sync_titles
BEFORE INSERT OR UPDATE ON posts
FOR EACH ROW
EXECUTE FUNCTION sync_post_title();`.compile(db)
);
// Backfill existing data (batch this for very large tables; see the batching example below)
await db.updateTable('posts')
.set({ new_title: sql`old_title` })
.execute();
}
Now, you update your application to write to both old_title and new_title. Once that’s live, you update it to read from new_title. After verifying everything works, you can remove the trigger and the old column. This process seems long, but each step is safe and reversible. Can you see how this prevents outages?
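That final cleanup is its own migration. A sketch of what it could look like, assuming the trigger and column names from the example above:
// Migration: Drop the sync trigger and the old column once nothing reads it
import { Kysely, sql } from 'kysely';

export async function up(db: Kysely<any>): Promise<void> {
  await sql`DROP TRIGGER IF EXISTS sync_titles ON posts;`.execute(db);
  await sql`DROP FUNCTION IF EXISTS sync_post_title();`.execute(db);

  await db.schema
    .alterTable('posts')
    .dropColumn('old_title')
    .execute();
}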
Managing these steps manually is error-prone. We need a migrator that can run them in order and keep track of what’s been executed. Kysely provides a Migrator class for this.
// src/database/migrator.ts
import { Migrator, FileMigrationProvider } from 'kysely';
import { promises as fs } from 'fs';
import path from 'path';
import { db } from './db';
const migrator = new Migrator({
db,
provider: new FileMigrationProvider({
fs,
path,
// An absolute path is more reliable than one relative to the working directory
migrationFolder: path.join(__dirname, 'migrations'),
}),
});
async function runMigrations() {
const { error, results } = await migrator.migrateToLatest();
results?.forEach((result) => {
console.log(`${result.migrationName}: ${result.status}`);
});
if (error) {
console.error('Migration failed:', error);
process.exit(1);
}
await db.destroy();
}
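One simple way to wire this up is to run migrations as a standalone step before the new application version starts, for example from an npm script (the script name and runner here are just suggestions):
// e.g. "migrate": "ts-node src/database/migrator.ts" in package.json
runMigrations()
  .then(() => process.exit(0))
  .catch((err) => {
    console.error(err);
    process.exit(1);
  });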
A critical part of production safety is the rollback plan. What if migration 003 has a bug and you need to revert? Your migrations should be written with a down function that precisely reverses the up.
// Example of a reversible migration
import { Kysely } from 'kysely';
export async function up(db: Kysely<any>): Promise<void> {
await db.schema
.createTable('profiles')
.addColumn('user_id', 'integer', (col) => col.references('users.id'))
.addColumn('bio', 'text')
.execute();
}
export async function down(db: Kysely<any>): Promise<void> {
await db.schema.dropTable('profiles').execute();
}
But rolling back a database is often harder than rolling back application code. Sometimes, a safer strategy is a forward fix. Instead of running down, you write a new migration 004 that fixes the problem introduced in 003. This avoids the complex state management of moving backward.
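As a purely hypothetical example, suppose migration 003 shipped a misspelled default value for an orders.status column. Rather than rolling back, migration 004 moves forward and repairs it:
// Migration: 004_fix_status_default.ts - forward fix for a bug from 003 (hypothetical)
import { Kysely } from 'kysely';

export async function up(db: Kysely<any>): Promise<void> {
  // Correct the column default introduced in 003
  await db.schema
    .alterTable('orders')
    .alterColumn('status', (col) => col.setDefault('pending'))
    .execute();

  // Repair any rows written with the bad value
  await db.updateTable('orders')
    .set({ status: 'pending' })
    .where('status', '=', 'pendign')
    .execute();
}

export async function down(db: Kysely<any>): Promise<void> {
  // Deliberately a no-op: there is no reason to reintroduce the bug
}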
Performance matters. Long-running migrations can cause table locks, slowing your application. For backfilling large tables, batch the updates. Don’t update millions of rows in one transaction.
// Safe backfill in batches
import { Kysely } from 'kysely';
async function backfillData(db: Kysely<any>) {
let lastId = 0;
const batchSize = 1000;
while (true) {
const rows = await db.selectFrom('users')
.select('id')
.where('email', 'is', null)
.where('id', '>', lastId)
.orderBy('id')
.limit(batchSize)
.execute();
if (rows.length === 0) break;
for (const row of rows) {
await db.updateTable('users')
.set({ email: `user-${row.id}@temp.com` })
.where('id', '=', row.id)
.execute();
}
lastId = rows[rows.length - 1].id;
// Small pause to reduce load
await new Promise(resolve => setTimeout(resolve, 100));
}
}
Integrating this into your CI/CD pipeline requires care. Never run migrations automatically on the main production database from a CI job. A good practice is to have a manual approval step or a pre-production environment that mirrors production’s size. Run the migrations there first and monitor for performance issues.
Finally, remember that zero-downtime migrations are a mindset. It’s about planning for safety, not speed. Before every change, ask: “How can I do this in steps? What if this step fails?” This discipline saves you from 3 AM pages and unhappy users.
I hope this changes how you think about your next database change. Start small, practice the expand-and-contract pattern, and build confidence. Your users will thank you for the uninterrupted service. Did this approach make you think of a past migration differently?
If you found this guide helpful, please share it with a teammate who manages production databases. Have you tried a similar approach? What challenges did you face? Let me know in the comments.