I’ve been building APIs for years, and I’ve noticed GraphQL adoption take off whenever teams face complex data requirements. Why wrestle with an ever-growing set of REST endpoints when clients can request exactly the data they need? That realization led me to build a production-ready GraphQL API with NestJS, Prisma, and Redis – and I’ll show you exactly how I did it. Stick around for the caching strategies that cut our query times by 70%!
First, I set up the foundation with the NestJS CLI. The project structure organizes features vertically – users, products, orders – with shared utilities in common directories. A global validation pipe checks every incoming request, while interceptors handle logging. Notice how CORS is locked down to known origins:
// Enable secure CORS
app.enableCors({
  origin: process.env.ALLOWED_ORIGINS?.split(',') || ['http://localhost:3000'],
  credentials: true,
});
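The global pipe mentioned above is, in practice, usually Nest’s built-in ValidationPipe, registered right next to the CORS call in main.ts. Here’s a minimal sketch – the specific options are illustrative, not gospel:

// Global request validation in main.ts
import { ValidationPipe } from '@nestjs/common';

app.useGlobalPipes(
  new ValidationPipe({
    whitelist: true,            // strip properties not declared on the DTO
    forbidNonWhitelisted: true, // reject requests that send unknown fields
    transform: true,            // convert plain payloads into typed DTO instances
  }),
);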
For the database, Prisma’s declarative schema defines models with relations. Decimal types prevent floating-point errors in financial calculations, while indexes optimize frequent queries. How often have you seen currency mishandled in e-commerce systems?
model Product {
  id         String  @id @default(cuid())
  price      Decimal @db.Decimal(10, 2) // Precise financial data
  categoryId String
  @@index([categoryId]) // Query optimization
}
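On the TypeScript side, Prisma surfaces that column as a Prisma.Decimal (backed by decimal.js), so money math stays exact. A quick illustration – the line-item shape here is hypothetical:

// Summing line items without floating-point drift
import { Prisma } from '@prisma/client';

const items = [
  { price: new Prisma.Decimal('19.99'), quantity: 3 },
  { price: new Prisma.Decimal('0.10'), quantity: 2 },
];

const total = items.reduce(
  (sum, item) => sum.add(item.price.mul(item.quantity)),
  new Prisma.Decimal(0),
);

console.log(total.toFixed(2)); // "60.17" – exact, where naive float math drifts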
GraphQL schema design follows NestJS’ code-first approach: we define types with decorators (a sample type follows the resolver below), then auto-generate the SDL. Resolvers handle data fetching, but here’s where problems emerge. When fetching users with their orders, did you know a naive approach triggers N+1 queries?
// Resolver without optimization
@Resolver(() => User)
export class UsersResolver {
  constructor(private usersService: UsersService) {}

  @Query(() => [User])
  async users() {
    return this.usersService.findAll();
  }
}
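For reference, the code-first types themselves are plain classes with decorators. A trimmed-down User might look like this (field names and import paths are illustrative):

// Code-first GraphQL type
import { Field, ID, ObjectType } from '@nestjs/graphql';
import { Order } from '../orders/models/order.model'; // path illustrative

@ObjectType()
export class User {
  @Field(() => ID)
  id: string;

  @Field()
  email: string;

  // Resolved by a field resolver rather than eagerly loaded
  @Field(() => [Order])
  orders: Order[];
}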
Enter Redis caching. I implemented two cache layers: request-scoped for single queries and application-wide for shared data. The interceptor below short-circuits a resolver call whenever a cached result exists:
// Redis cache interceptor
import { CallHandler, ExecutionContext, Inject, Injectable, NestInterceptor } from '@nestjs/common';
import { CACHE_MANAGER } from '@nestjs/cache-manager'; // exported from '@nestjs/common' on Nest < 10
import { GqlExecutionContext } from '@nestjs/graphql';
import { Cache } from 'cache-manager';
import { of, tap } from 'rxjs';

@Injectable()
export class CacheInterceptor implements NestInterceptor {
  constructor(@Inject(CACHE_MANAGER) private cacheManager: Cache) {}

  async intercept(context: ExecutionContext, next: CallHandler) {
    const key = this.getCacheKey(context);
    const cached = await this.cacheManager.get(key);
    if (cached) return of(cached);
    return next.handle().pipe(
      tap(data => this.cacheManager.set(key, data, 300)) // 5min TTL (seconds on cache-manager v4)
    );
  }

  private getCacheKey(context: ExecutionContext): string {
    // Build a key from the resolved field name and its arguments
    const gql = GqlExecutionContext.create(context);
    return `${gql.getInfo().fieldName}:${JSON.stringify(gql.getArgs())}`;
  }
}
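Wiring Redis in happens at the module level. A minimal sketch, assuming the classic cache-manager-redis-store adapter (newer Nest versions use cache-manager-redis-yet and a slightly different config shape):

// Redis-backed cache module
import { Module } from '@nestjs/common';
import { CacheModule } from '@nestjs/cache-manager'; // '@nestjs/common' on Nest < 10
import * as redisStore from 'cache-manager-redis-store';

@Module({
  imports: [
    CacheModule.register({
      store: redisStore,
      host: process.env.REDIS_HOST || 'localhost',
      port: Number(process.env.REDIS_PORT) || 6379,
      ttl: 300, // default TTL in seconds
    }),
  ],
})
export class AppCacheModule {}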
For N+1 issues, DataLoader batches requests. Instead of 100 separate product queries for an order list, we make one batch request. Notice the difference in SQL logs:
// Order resolver with DataLoader
@ResolveField('products', () => [Product])
async getOrderProducts(
  @Parent() order: Order,
  @Context() { loaders }: GraphQLContext,
) {
  return loaders.productsLoader.loadMany(
    order.items.map(item => item.productId),
  );
}
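The loader itself lives in the per-request GraphQL context. Here’s a sketch of how a productsLoader can be built with Prisma (the factory name is mine, not from the codebase):

// Batched product loader
import DataLoader from 'dataloader';
import { PrismaClient, Product } from '@prisma/client';

export function createProductsLoader(prisma: PrismaClient) {
  return new DataLoader<string, Product | null>(async (ids) => {
    // One findMany for every product requested during this tick of the event loop
    const products = await prisma.product.findMany({
      where: { id: { in: [...ids] } },
    });
    const byId = new Map(products.map(p => [p.id, p] as const));
    // DataLoader expects results in the same order as the incoming keys
    return ids.map(id => byId.get(id) ?? null);
  });
}

Creating a fresh loader per request keeps its cache scoped to that request, so one user’s data never bleeds into another’s.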
Authentication uses passport-jwt with GraphQL context injection. Guards protect resolvers based on user roles. But how do we handle real-time updates? Subscriptions via WebSockets notify clients about order status changes:
// Order status subscription
@Subscription(() => Order, {
  filter: (payload, variables) =>
    payload.orderUpdated.userId === variables.userId,
})
orderUpdated(@Args('userId') userId: string) {
  // pubSub is a shared PubSub instance (graphql-subscriptions, or a Redis-backed one in production)
  return pubSub.asyncIterator('ORDER_UPDATED');
}
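Back on the auth side: guards work on resolvers once they know where to find the request, since GraphQL doesn’t hand Passport a plain HTTP request by default. A minimal GraphQL-aware JWT guard (role checks would layer on top of this):

// GraphQL-aware JWT guard
import { ExecutionContext, Injectable } from '@nestjs/common';
import { GqlExecutionContext } from '@nestjs/graphql';
import { AuthGuard } from '@nestjs/passport';

@Injectable()
export class GqlAuthGuard extends AuthGuard('jwt') {
  // Passport expects an Express request object, so pull it out of the GraphQL context
  getRequest(context: ExecutionContext) {
    const ctx = GqlExecutionContext.create(context);
    return ctx.getContext().req;
  }
}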
Performance monitoring proved crucial. We added Apollo tracing to identify slow resolvers. One complex product search dropped from 2.3s to 140ms after adding composite indexes and Redis caching. Ever wonder how much latency stems from serialization? We optimized DTO transformations.
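For the curious, the composite index is a one-line addition to the Prisma schema – roughly this shape (the exact column combination depends on your search filters):

model Product {
  // ...fields as before
  @@index([categoryId, price]) // composite index serving the filtered, price-sorted search
}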
For deployment, Docker containers ensure consistency. The compose file includes Postgres, Redis, and our NestJS app with proper network isolation. Logging uses Morgan with custom formats piped to CloudWatch.
Testing followed a pyramid approach: unit tests for services, integration for APIs, and load testing with K6. Mocking Redis and Prisma prevented test flakiness. Remember to always test failure modes – what happens when Redis drops?
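The mocking itself is mostly about swapping real providers for in-memory fakes in the testing module. A sketch – ProductsService, PrismaService, and findAll are hypothetical names standing in for your own:

// Unit test with Redis and Prisma mocked out
import { Test } from '@nestjs/testing';
import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { ProductsService } from './products.service';
import { PrismaService } from '../prisma/prisma.service';

describe('ProductsService', () => {
  it('reads products without touching real Redis or Postgres', async () => {
    const moduleRef = await Test.createTestingModule({
      providers: [
        ProductsService,
        // In-memory stand-ins keep the test fast and deterministic
        { provide: CACHE_MANAGER, useValue: { get: jest.fn(), set: jest.fn() } },
        {
          provide: PrismaService,
          useValue: { product: { findMany: jest.fn().mockResolvedValue([{ id: 'p1' }]) } },
        },
      ],
    }).compile();

    const service = moduleRef.get(ProductsService);
    await expect(service.findAll()).resolves.toEqual([{ id: 'p1' }]);
  });
});

From there, pointing the CACHE_MANAGER mock’s get at mockRejectedValue is an easy way to answer the “what happens when Redis drops?” question.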
I’ve shared the patterns that scaled to handle 15,000 RPM in our production system. What optimization strategies have you tried? Share your experiences below – if this helped you, pass it along to another developer facing similar challenges!