I’ve been thinking a lot about what separates hobby projects from production-ready systems. Recently, while scaling a GraphQL API that started showing performance issues under load, I realized how crucial it is to build with scalability from day one. The combination of NestJS, Prisma, and Redis has become my go-to stack for creating robust GraphQL APIs that can handle real-world traffic.
Have you ever noticed how quickly a simple API can become complex when you add authentication, caching, and real-time features?
Let me walk you through building a production-ready GraphQL API. We’ll start with the foundation. NestJS provides the perfect structure for large applications, while Prisma offers type-safe database operations. Redis handles caching and real-time features efficiently.
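Before diving into the individual pieces, here's roughly how I wire them together at the module level. This is a minimal sketch; the driver options and the autoSchemaFile choice are my defaults, so adjust them to your setup:

import { Module } from '@nestjs/common';
import { GraphQLModule } from '@nestjs/graphql';
import { ApolloDriver, ApolloDriverConfig } from '@nestjs/apollo';

@Module({
  imports: [
    GraphQLModule.forRoot<ApolloDriverConfig>({
      driver: ApolloDriver,
      // Code-first: the schema is generated from resolver decorators
      autoSchemaFile: true,
      // Enable WebSocket transport for the subscriptions covered later
      subscriptions: { 'graphql-ws': true },
    }),
  ],
})
export class AppModule {}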
Here’s how I set up the core Redis service with advanced caching patterns:
import { Inject, Injectable } from '@nestjs/common';
import Redis from 'ioredis';

// Injection token for the shared ioredis client (provided by the module below)
export const REDIS_CLIENT = 'REDIS_CLIENT';

@Injectable()
export class RedisService {
  constructor(@Inject(REDIS_CLIENT) private readonly redis: Redis) {}

  async get<T>(key: string): Promise<T | null> {
    const value = await this.redis.get(key);
    return value !== null ? (JSON.parse(value) as T) : null;
  }

  async set(key: string, value: unknown, ttl?: number): Promise<void> {
    const serializedValue = JSON.stringify(value);
    if (ttl) {
      // SETEX stores the value with a TTL in seconds
      await this.redis.setex(key, ttl, serializedValue);
    } else {
      await this.redis.set(key, serializedValue);
    }
  }

  async del(key: string): Promise<void> {
    await this.redis.del(key);
  }

  // Pattern-based deletion for cache invalidation.
  // Note: KEYS blocks Redis while it scans the keyspace; a SCAN-based
  // variant for larger datasets appears later in the article.
  async deletePattern(pattern: string): Promise<void> {
    const keys = await this.redis.keys(pattern);
    if (keys.length > 0) {
      await this.redis.del(...keys);
    }
  }
}
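For the injection to work, something has to provide the REDIS_CLIENT token. Here's a minimal module sketch; the single-client factory and the REDIS_URL environment variable are my choices, not a requirement of the stack:

import { Module } from '@nestjs/common';
import Redis from 'ioredis';
import { REDIS_CLIENT, RedisService } from './redis.service';

@Module({
  providers: [
    {
      provide: REDIS_CLIENT,
      // One shared ioredis client for the whole application
      useFactory: () =>
        new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379'),
    },
    RedisService,
  ],
  exports: [RedisService],
})
export class RedisModule {}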
What happens when multiple users request the same data simultaneously? That’s where cache stampede protection comes in. I implement a simple Redis-based mutex so that only one process recomputes a missing value while the others wait:
async getWithFallback<T>(
  key: string,
  fallback: () => Promise<T>,
  ttl: number = 300
): Promise<T> {
  // Compare against null so cached falsy values (0, false, '') are still served
  const cached = await this.get<T>(key);
  if (cached !== null) return cached;

  // Try to acquire a short-lived lock; NX means only one caller wins
  const mutexKey = `mutex:${key}`;
  const hasMutex = await this.redis.set(mutexKey, '1', 'PX', 5000, 'NX');
  if (!hasMutex) {
    // Another process is already computing the value; wait briefly and retry
    await new Promise(resolve => setTimeout(resolve, 100));
    return this.getWithFallback(key, fallback, ttl);
  }

  try {
    const freshData = await fallback();
    await this.set(key, freshData, ttl);
    return freshData;
  } finally {
    // Always release the lock, even if the fallback throws
    await this.del(mutexKey);
  }
}
The real power comes when we integrate this caching with GraphQL resolvers. Here’s how I handle cached queries for user data:
import { Args, Query, Resolver } from '@nestjs/graphql';
// Local imports (paths assumed): User model, UsersService, RedisService

@Resolver(() => User)
export class UsersResolver {
  constructor(
    private usersService: UsersService,
    private redisService: RedisService
  ) {}

  @Query(() => User)
  async user(@Args('id') id: string): Promise<User> {
    const cacheKey = `user:${id}`;
    return this.redisService.getWithFallback(
      cacheKey,
      () => this.usersService.findById(id),
      600 // 10 minutes
    );
  }
}
But what about mutations? They need to invalidate relevant cache entries. Here’s my approach:
@Mutation(() => Post)
async updatePost(
  @Args('input') input: UpdatePostInput
): Promise<Post> {
  const updatedPost = await this.postsService.update(input);

  // Invalidate every cached entry touching this post, plus any cached lists
  await this.redisService.deletePattern(`post:${input.id}*`);
  await this.redisService.deletePattern('posts:list*');

  return updatedPost;
}
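One caveat before moving on: deletePattern relies on KEYS, which blocks Redis while it walks the whole keyspace. That's fine at small scale, but for bigger datasets I'd swap in a SCAN-based version. Here's a sketch using ioredis's scanStream, as a drop-in replacement inside the same RedisService:

// Non-blocking variant: SCAN walks the keyspace in small batches
async deletePattern(pattern: string): Promise<void> {
  const stream = this.redis.scanStream({ match: pattern, count: 100 });
  for await (const keys of stream) {
    if (keys.length > 0) {
      await this.redis.del(...keys);
    }
  }
}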
Did you know that N+1 query problems can silently kill your API’s performance? That’s where DataLoader comes in. I create batch loading functions that work seamlessly with Prisma:
import { Injectable } from '@nestjs/common';
import DataLoader from 'dataloader';
// Local imports (paths assumed): PrismaService, User

@Injectable()
export class UserLoader {
  constructor(private prisma: PrismaService) {}

  createUsersLoader() {
    return new DataLoader<string, User>(async (userIds) => {
      // One query for the whole batch instead of one per user
      const users = await this.prisma.user.findMany({
        where: { id: { in: [...userIds] } }
      });
      // DataLoader requires one result per key, in the same order as the keys
      const userMap = new Map(users.map(user => [user.id, user]));
      return userIds.map(
        id => userMap.get(id) ?? new Error(`User ${id} not found`)
      );
    });
  }
}
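The loader only pays off if each request gets its own instance: batching happens within a request, but nothing is cached across requests. I create it in the GraphQL context and read it from field resolvers. A sketch, assuming UserLoader is exported from a UsersModule and posts carry an authorId:

import DataLoader from 'dataloader';
import { Context, Parent, ResolveField } from '@nestjs/graphql';

// In the GraphQL module registration: forRootAsync lets us inject the
// loader factory and build a fresh loader per incoming request
GraphQLModule.forRootAsync<ApolloDriverConfig>({
  driver: ApolloDriver,
  imports: [UsersModule],
  inject: [UserLoader],
  useFactory: (userLoader: UserLoader) => ({
    autoSchemaFile: true,
    context: () => ({ usersLoader: userLoader.createUsersLoader() }),
  }),
});

// In the posts resolver: every author lookup goes through the batch loader
@ResolveField(() => User)
author(
  @Parent() post: Post,
  @Context('usersLoader') usersLoader: DataLoader<string, User>
) {
  return usersLoader.load(post.authorId);
}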
For real-time features, GraphQL subscriptions with Redis pub/sub enable WebSocket delivery that scales across multiple server instances:
@Subscription(() => Post, {
  // Only deliver events for the author this client subscribed to
  filter: (payload, variables) =>
    payload.postAdded.authorId === variables.userId
})
postAdded(@Args('userId') userId: string) {
  // pubSub is a shared, Redis-backed instance (set up below)
  return pubSub.asyncIterator('POST_ADDED');
}
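Two details make that snippet work in production: pubSub must be a Redis-backed instance shared by publishers and subscribers, and mutations have to publish the events. A sketch of both (the CreatePostInput type and REDIS_URL variable are my assumptions):

import Redis from 'ioredis';
import { RedisPubSub } from 'graphql-redis-subscriptions';

// Redis pub/sub needs separate connections for publishing and subscribing
const redisUrl = process.env.REDIS_URL ?? 'redis://localhost:6379';
export const pubSub = new RedisPubSub({
  publisher: new Redis(redisUrl),
  subscriber: new Redis(redisUrl),
});

// In the posts resolver: publish after the write succeeds so every
// server instance, not just this one, pushes the event to its clients
@Mutation(() => Post)
async createPost(@Args('input') input: CreatePostInput): Promise<Post> {
  const post = await this.postsService.create(input);
  await pubSub.publish('POST_ADDED', { postAdded: post });
  return post;
}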
Error handling is crucial in production. Since GraphQL resolvers don’t write to an HTTP response object, I use a GraphQL-aware filter that maps exceptions to consistent entries in the errors array:
import { ArgumentsHost, Catch } from '@nestjs/common';
import { GqlExceptionFilter } from '@nestjs/graphql';
import { GraphQLError } from 'graphql';
import { Prisma } from '@prisma/client';

@Catch()
export class GlobalExceptionFilter implements GqlExceptionFilter {
  catch(exception: unknown, host: ArgumentsHost) {
    if (exception instanceof Prisma.PrismaClientKnownRequestError) {
      // Surface the Prisma error code without leaking query internals
      return new GraphQLError('Database operation failed', {
        extensions: { code: exception.code },
      });
    }
    // Default: hide the details from clients, log the exception elsewhere
    return new GraphQLError('Internal server error');
  }
}
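One step that's easy to forget: the filter does nothing until it's registered. I apply it globally at bootstrap (the import paths here are my assumption):

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { GlobalExceptionFilter } from './global-exception.filter';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // Every resolver now flows through the GraphQL-aware filter
  app.useGlobalFilters(new GlobalExceptionFilter());
  await app.listen(3000);
}
bootstrap();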
Testing becomes straightforward with this architecture. I use dependency injection to mock Redis and database calls:
import { Test, TestingModule } from '@nestjs/testing';
// Local imports (paths assumed): UsersResolver, UsersService, RedisService

describe('UsersResolver', () => {
  let resolver: UsersResolver;
  let mockRedisService: Partial<RedisService>;

  beforeEach(async () => {
    mockRedisService = {
      getWithFallback: jest.fn()
    };

    const module: TestingModule = await Test.createTestingModule({
      providers: [
        UsersResolver,
        { provide: RedisService, useValue: mockRedisService },
        // UsersResolver also depends on UsersService, so it needs a stub too
        { provide: UsersService, useValue: { findById: jest.fn() } }
      ]
    }).compile();

    resolver = module.get<UsersResolver>(UsersResolver);
  });
});
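With the mocks in place, a test just asserts that the resolver delegates to the cache layer with the right key and TTL. A sketch, with a made-up user fixture:

it('serves user queries through the cache layer', async () => {
  const user = { id: '1', name: 'Ada' } as User; // hypothetical fixture
  (mockRedisService.getWithFallback as jest.Mock).mockResolvedValue(user);

  await expect(resolver.user('1')).resolves.toEqual(user);
  expect(mockRedisService.getWithFallback).toHaveBeenCalledWith(
    'user:1',
    expect.any(Function),
    600 // the TTL set in the resolver
  );
});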
Deployment with Docker ensures consistency across environments. Here’s a simple Dockerfile that works well:
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
# --omit=dev replaces the deprecated --only=production flag
RUN npm ci --omit=dev
COPY dist ./dist
COPY prisma ./prisma
# Regenerate the Prisma client for the container's platform; npx fetches
# the Prisma CLI at build time since it isn't a production dependency
RUN npx prisma generate
EXPOSE 3000
CMD ["node", "dist/main.js"]
The beauty of this setup is how each piece complements the others. NestJS provides the structure, Prisma ensures type safety, and Redis handles performance. Together, they create a foundation that can scale with your application’s needs.
What challenges have you faced when building GraphQL APIs? I’d love to hear about your experiences and solutions. If this approach resonates with you, please share this article with your team or colleagues who might benefit from these patterns. Let’s continue the conversation in the comments below!