Replace Mongoose with MongoDB Driver and Zod for Type-Safe Node.js Apps
Learn how to replace Mongoose with the MongoDB Node.js driver and Zod for better performance, runtime validation, and cleaner architecture.
I spent years building applications with Mongoose, and I loved its simplicity. But as my projects grew, I started feeling the weight of its abstraction. The middleware, the schema parsing on every document, the sometimes sluggish performance under high load—these began to bother me. I wanted more control. I wanted to be closer to the database. That’s when I decided to try the official MongoDB Node.js driver combined with Zod for runtime validation. And I haven’t looked back.
What if you could skip Mongoose entirely, yet still enjoy full runtime type safety? That’s exactly what this approach offers. You get the raw power and speed of the native driver, plus the clarity of Zod schemas that enforce your data shape both at compile time and at runtime. It’s a bit like building your own ODM, but only the parts you actually need. And once you understand the pattern, you’ll wonder why you ever accepted the extra overhead.
Let me show you how to set up a type-safe MongoDB connection manager that handles pooling, retries, and disconnections gracefully. I’ll also cover how to use Zod to validate every document coming in and going out, and how to implement a clean repository pattern that keeps your business logic separate from database calls. The examples come from a real project I built for an inventory management system, and I’ve included a few personal anecdotes along the way to make the journey feel less like a lecture.
Start by initializing a Node.js project with TypeScript, the MongoDB driver, and Zod. You’ll need a tsconfig that targets ES2022, strict mode, and a source folder structure. Here’s the key: don’t just instantiate a MongoClient every time you need a connection. That’s a recipe for race conditions and resource leaks. Instead, build a connection manager that keeps a singleton client and a reference to the database. I also added a guard so that if multiple parts of your code try to connect simultaneously, the second caller waits for the first to finish. This is a pattern I borrowed from database connection pools in other languages, and it works beautifully.
import { MongoClient, MongoClientOptions, Db } from "mongodb";

interface ConnectionConfig {
  uri: string;
  dbName: string;
  options?: MongoClientOptions;
}

const state: {
  client: MongoClient | null;
  db: Db | null;
  isConnecting: boolean;
} = { client: null, db: null, isConnecting: false };

export async function connectToDatabase(config: ConnectionConfig): Promise<Db> {
  if (state.db && state.client) return state.db;

  if (state.isConnecting) {
    // wait for the ongoing connection attempt to settle
    while (state.isConnecting) await new Promise(r => setTimeout(r, 100));
    if (!state.db) throw new Error("Concurrent connection attempt failed");
    return state.db;
  }

  state.isConnecting = true;
  try {
    const client = new MongoClient(config.uri, { maxPoolSize: 10, ...config.options });
    await client.connect();
    await client.db("admin").command({ ping: 1 }); // verify the connection actually works
    state.client = client;
    state.db = client.db(config.dbName);
    // reset state if the client closes, so the next call reconnects
    client.on("close", () => { state.client = null; state.db = null; });
    return state.db;
  } finally {
    state.isConnecting = false;
  }
}
Notice there's no retry logic? I left it out above for brevity, but in production you should wrap the connection attempt in a retry loop with exponential backoff. How many times have you seen a service crash because a transient network issue killed the first connection? The native driver has built-in retry logic for reads and writes, but the initial connection is not covered, so you have to handle that yourself.
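A minimal sketch of such a wrapper (the name `withRetry` and its defaults are mine, not part of the driver):

```typescript
// Hypothetical helper: retries an async operation with exponential backoff.
// The name and default values are illustrative, not part of the MongoDB driver.
export async function withRetry<T>(
  operation: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt === maxAttempts) break;
      // 200ms, 400ms, 800ms, ... capped at 10 seconds
      const delay = Math.min(baseDelayMs * 2 ** (attempt - 1), 10_000);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// usage: wrap the initial connection attempt
// const db = await withRetry(() => connectToDatabase(config));
```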
Now, let’s talk about Zod. Why not just rely on TypeScript types? Because TypeScript disappears at runtime. If someone sends a malformed document from an old version of your microservice, or a rogue script injects bad data directly into the database, your TypeScript types won’t save you. Zod validates the shape of every object when you read it from the database and when you write it. It also works beautifully with TypeScript’s z.infer to keep the static types in sync.
Here’s a typical user schema:
import { z } from "zod";
import { ObjectId } from "mongodb";

export const UserSchema = z.object({
  _id: z.instanceof(ObjectId).optional(),
  email: z.string().email(),
  name: z.string().min(2).max(50),
  age: z.number().int().positive().optional(),
  createdAt: z.date().default(() => new Date()),
  updatedAt: z.date().default(() => new Date()),
});

export type User = z.infer<typeof UserSchema>;
export const parseUser = (data: unknown): User => UserSchema.parse(data);
The parseUser function throws if the data doesn’t match, giving you a clear stack trace. You can also use safeParse to return errors gracefully. In my inventory project, I had a legacy collection that sometimes contained null values where numbers should be. Zod caught those immediately, and I was able to clean up the data without any downstream crashes.
Next, you need a typed collection wrapper. This is where the magic of generics comes in. Instead of calling db.collection('users') everywhere, you create a helper that returns a collection typed to your Zod‑inferred interface. Then every find, insert, update, or aggregate returns documents that are automatically validated.
import { Db, Collection, Document } from "mongodb";
import { parseUser } from "../schemas/user.schema";

export function getTypedCollection<T extends Document>(
  db: Db,
  collectionName: string,
  parse: (data: unknown) => T
): Collection<T> & { parse: (data: unknown) => T } {
  const rawCollection = db.collection<T>(collectionName);
  return Object.assign(rawCollection, { parse });
}

// usage
const usersCollection = getTypedCollection(db, "users", parseUser);
const user = await usersCollection.findOne({ email: "[email protected]" });
// user is typed as User, but nothing has validated it at runtime yet
But wait: findOne returns T | null straight from the driver, and nothing has run the document through the Zod schema yet. A safer approach is to have the find methods always run the parse function on their results. I'll show you how to do that in the repository pattern.
The repository pattern is the natural home for database operations. You create a base repository that provides common CRUD methods, each one parsing inputs and outputs through your Zod schema. Then for each entity you extend it, adding custom queries. This keeps your application code clean and testable because you can mock the repository in unit tests.
import { Collection, Db, Document, Filter, OptionalUnlessRequiredId } from "mongodb";
import { z } from "zod";

export class BaseRepository<T extends Document & { _id?: any }> {
  protected collection: Collection<T>;
  private schema: z.ZodType<T>;

  constructor(db: Db, collectionName: string, schema: z.ZodType<T>) {
    this.collection = db.collection<T>(collectionName);
    this.schema = schema;
  }

  async findOne(filter: Filter<T>): Promise<T | null> {
    const doc = await this.collection.findOne(filter);
    if (!doc) return null;
    return this.schema.parse(doc); // validate on the way out
  }

  async insertOne(data: T): Promise<string> {
    const parsed = this.schema.parse(data); // validate before write
    const result = await this.collection.insertOne(parsed as OptionalUnlessRequiredId<T>);
    return result.insertedId.toString();
  }

  // more methods: find, update, aggregate, etc.
}
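To give a feel for how those elided methods go, here's a dependency-free sketch of the same validate-at-the-boundary recipe. MiniCollection, Item, and parseItem are stand-ins I made up so the pattern runs in isolation; in the real repository they'd be the driver's Collection and a Zod schema's parse:

```typescript
// A minimal stand-in for the driver's Collection, just enough to show the pattern.
interface MiniCollection<T> {
  find(filter: Partial<T>): Promise<T[]>;
}

class ValidatedRepo<T> {
  constructor(
    private collection: MiniCollection<T>,
    private parse: (data: unknown) => T // stands in for schema.parse
  ) {}

  // every document leaving the store goes through the parser
  async find(filter: Partial<T>): Promise<T[]> {
    const docs = await this.collection.find(filter);
    return docs.map(doc => this.parse(doc)); // throws on the first bad document
  }
}

interface Item { sku: string; qty: number }

// stand-in validator; in the real repository this is ItemSchema.parse
const parseItem = (data: unknown): Item => {
  const d = data as Item;
  if (typeof d.sku !== "string" || !Number.isInteger(d.qty) || d.qty < 0) {
    throw new Error(`invalid item: ${JSON.stringify(data)}`);
  }
  return d;
};

// in-memory fake with one corrupt row, like my legacy inventory collection
const fake: MiniCollection<Item> = {
  async find() {
    return [{ sku: "a1", qty: 3 }, { sku: "b2", qty: -1 }];
  },
};
```

Calling find on this fake rejects as soon as the corrupt row is mapped, which is exactly the failure mode you want: loud, early, and pointing at the bad document.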
Now apply this to users:
export class UserRepository extends BaseRepository<User> {
  constructor(db: Db) {
    super(db, "users", UserSchema);
  }

  async findByEmail(email: string): Promise<User | null> {
    return this.findOne({ email });
  }
}
Aggregation pipelines are a big part of MongoDB’s power. With the native driver, you have full flexibility to compose complex stages. The problem is that the results of an aggregation are not automatically typed—they’re just generic documents. You can use Zod to parse each document in the result cursor. This adds a bit of overhead but guarantees type safety.
// inside BaseRepository
async aggregate<Output>(pipeline: object[], outputSchema: z.ZodType<Output>): Promise<Output[]> {
  const cursor = this.collection.aggregate(pipeline);
  const results: Output[] = [];
  // validate each document as it streams off the cursor
  for await (const doc of cursor) {
    results.push(outputSchema.parse(doc));
  }
  return results;
}
For example, to get all users aged 18 or over:
const adultSchema = z.object({ name: z.string(), age: z.number() });

const adults = await userRepository.aggregate([
  { $match: { age: { $gte: 18 } } },
  { $project: { name: 1, age: 1, _id: 0 } },
], adultSchema);
Testing is much easier without Mongoose. You can use mongodb-memory-server to spin up an ephemeral instance, seed it with test data, and then run your repository methods. Because you control everything via the native driver, there is no magic interfering with your tests. Here’s a quick jest test setup:
import { MongoMemoryServer } from "mongodb-memory-server";
import { Db, ObjectId } from "mongodb";
import { connectToDatabase } from "../src/db/client";
import { UserRepository } from "../src/repositories/user.repository";
import { User } from "../src/schemas/user.schema";

let mongoServer: MongoMemoryServer;
let db: Db;

beforeAll(async () => {
  mongoServer = await MongoMemoryServer.create();
  db = await connectToDatabase({ uri: mongoServer.getUri(), dbName: "test" });
});

afterAll(async () => {
  await db.dropDatabase();
  await mongoServer.stop();
});

test("should insert and retrieve a user", async () => {
  const userRepo = new UserRepository(db);
  // the cast is safe here: createdAt/updatedAt are filled in by the schema defaults
  const id = await userRepo.insertOne({ email: "[email protected]", name: "Test" } as User);
  const user = await userRepo.findOne({ _id: new ObjectId(id) } as any);
  expect(user).toBeDefined();
  expect(user!.email).toBe("[email protected]");
});
You might ask: is it worth the extra code compared to Mongoose? If your application has high throughput, strict schema requirements, or you simply dislike fighting with Mongoose’s middleware, absolutely yes. The native driver is faster, lighter, and gives you full control. The only cost is writing a bit more infrastructure code, but once it’s done, you never have to guess what’s happening under the hood.
I still use Mongoose for rapid prototyping or projects with very simple data models. But for anything that demands performance and reliability, I switch to this stack. The combination of the native driver, Zod, and a custom repository pattern has saved me countless hours of debugging and allowed me to ship features faster because I understand exactly what each database call does.
Now it’s your turn. Try ripping out Mongoose from a small service and implementing this approach. You’ll be surprised how clean your code becomes. And if you run into any issues, leave a comment below — I’ll gladly help you debug.
If you found this article useful, please like it and share it with your team. And don’t forget to subscribe to the newsletter for more deep dives into database patterns and Node.js best practices. Your feedback helps me write better content, so let me know what you’d like to see next.