As I've scaled GraphQL APIs over the years, I've repeatedly faced performance bottlenecks that emerge when applications grow. Just last month, while optimizing a client's blog platform that was buckling under heavy traffic, I realized how crucial it is to get the foundation right from day one. That's why I want to share this practical approach combining NestJS, Prisma, and Redis - a stack that's helped me deliver responsive GraphQL APIs even under significant load. Let's build this together.
When starting a new NestJS GraphQL project, I always begin with a clear structure. Here's how I organize my codebase:
```bash
src/
├── auth/ # Authentication logic
├── common/ # Shared utilities
├── database/ # Prisma integration
├── modules/ # Feature modules
├── cache/ # Redis implementation
└── app.module.ts # Main configuration
```

The core of our GraphQL setup lives in `app.module.ts`, where I configure Apollo Server with error handling and schema generation:
```typescript
GraphQLModule.forRoot<ApolloDriverConfig>({
  driver: ApolloDriver,
  autoSchemaFile: join(process.cwd(), 'src/schema.gql'),
  playground: process.env.NODE_ENV === 'development',
  context: ({ req, res }) => ({ req, res }),
  formatError: (error) => ({
    message: error.message,
    code: error.extensions?.code,
    path: error.path,
  }),
})
```
Ever wonder why some GraphQL APIs return dates as numbers? I implement custom scalars for better type handling:
```typescript
@Scalar('Date', () => Date)
export class DateScalar implements CustomScalar<number, Date> {
  serialize(value: Date): number {
    return value.getTime(); // outgoing: Date -> Unix ms timestamp
  }

  parseValue(value: number): Date {
    return new Date(value); // incoming variables: timestamp -> Date
  }

  parseLiteral(ast: ValueNode): Date {
    // incoming inline literals: only integer timestamps are accepted
    return ast.kind === Kind.INT ? new Date(parseInt(ast.value, 10)) : null;
  }
}
```
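Stripped of the NestJS decorators, the scalar's contract is just two pure functions. This is a plain-TypeScript sketch (the function names are mine, not NestJS APIs) of the round trip it guarantees:

```typescript
// Illustrative only: the serialize/parse contract behind the Date scalar.
const serializeDate = (value: Date): number => value.getTime();
const parseDate = (value: number): Date => new Date(value);

const original = new Date('2024-01-15T12:00:00Z');
const overTheWire = serializeDate(original); // a plain number on the wire
const restored = parseDate(overTheWire);

console.log(restored.getTime() === original.getTime()); // true
```

Clients see a plain number, resolvers see a real `Date`, and neither side has to guess at string formats.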
For database modeling, Prisma’s schema language keeps things declarative. Here’s how I structure blog relationships:
```prisma
model Post {
  id       String    @id @default(cuid())
  title    String
  content  String
  author   User      @relation(fields: [authorId], references: [id])
  authorId String
  comments Comment[]
}

model Comment {
  id      String @id @default(cuid())
  content String
  post    Post   @relation(fields: [postId], references: [id])
  postId  String
}
```
Notice how relations are explicitly defined? This clarity prevents common data modeling mistakes. But what happens when you need to fetch a post with all its comments? That’s where resolver patterns come in.
In my user resolver, I keep database calls clean using Prisma’s fluent API:
```typescript
@Resolver(() => User)
export class UsersResolver {
  constructor(private prisma: DatabaseService) {}

  @Query(() => [User])
  async users(): Promise<User[]> {
    return this.prisma.user.findMany();
  }
}
```
Simple enough, right? But when we add nested queries, performance can degrade rapidly. That’s where Redis enters the picture. I create a caching service that wraps Redis operations:
```typescript
@Injectable()
export class CacheService {
  constructor(private readonly redis: Redis) {}

  async get(key: string): Promise<string | null> {
    return this.redis.get(key);
  }

  async set(key: string, value: string, ttl = 300): Promise<void> {
    await this.redis.set(key, value, 'EX', ttl);
  }
}
```
Then in my posts service, I add caching logic before hitting the database:
```typescript
async findPostById(id: string): Promise<Post | null> {
  const cached = await this.cacheService.get(`post:${id}`);
  if (cached) return JSON.parse(cached);

  const post = await this.prisma.post.findUnique({ where: { id } });
  if (post) {
    // Only cache real hits, so a missing post isn't stored as the string "null"
    await this.cacheService.set(`post:${id}`, JSON.stringify(post));
  }
  return post;
}
```
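This cache-aside flow can be exercised without Redis at all. Here's a self-contained sketch where a `Map` stands in for Redis and a `fetchPost` stub (invented for illustration) stands in for Prisma; TTL handling is omitted:

```typescript
// Cache-aside in miniature: check the cache, fall back to the "database",
// then populate the cache so the next read is served from memory.
const cache = new Map<string, string>();
let dbHits = 0;

// Hypothetical data source standing in for Prisma.
async function fetchPost(id: string): Promise<{ id: string; title: string } | null> {
  dbHits++;
  return { id, title: `Post ${id}` };
}

async function findPostById(id: string) {
  const cached = cache.get(`post:${id}`);
  if (cached) return JSON.parse(cached); // hit: no database round trip

  const post = await fetchPost(id);
  if (post) cache.set(`post:${id}`, JSON.stringify(post));
  return post;
}

async function main() {
  await findPostById('42'); // miss -> hits the database
  await findPostById('42'); // hit  -> served from the cache
  console.log(dbHits); // 1
}
main();
```

Two reads, one database hit: that ratio is the entire point, and it's what Redis buys us across processes instead of within one.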
But caching alone doesn’t solve all problems. When fetching a user’s posts with comments, we risk the N+1 query problem. How do we prevent this? DataLoader batches our database requests:
```typescript
@Injectable()
export class PostsLoader {
  constructor(private prisma: DatabaseService) {}

  createLoader(): DataLoader<string, Post[]> {
    return new DataLoader(async (userIds: readonly string[]) => {
      // One query for the whole batch instead of one query per user
      const posts = await this.prisma.post.findMany({
        where: { authorId: { in: [...userIds] } },
      });
      // DataLoader requires results in the same order as the input keys
      return userIds.map((id) => posts.filter((p) => p.authorId === id));
    });
  }
}
```
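If you're curious what DataLoader is doing for us, the batching trick fits in a few lines. This `MiniLoader` is a deliberately stripped-down illustration, not the real library: loads issued in the same tick are collected and resolved by a single batch call scheduled on the microtask queue.

```typescript
// Minimal batching loader in the spirit of DataLoader (illustrative only).
type BatchFn<K, V> = (keys: K[]) => Promise<V[]>;

class MiniLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];
  constructor(private batchFn: BatchFn<K, V>) {}

  load(key: K): Promise<V> {
    return new Promise((resolve) => {
      if (this.queue.length === 0) {
        // First load this tick: schedule one flush for the whole batch.
        queueMicrotask(() => this.flush());
      }
      this.queue.push({ key, resolve });
    });
  }

  private async flush() {
    const batch = this.queue;
    this.queue = [];
    const values = await this.batchFn(batch.map((e) => e.key));
    batch.forEach((e, i) => e.resolve(values[i])); // results match key order
  }
}

// Demo: three loads in the same tick, one batched "query".
let batchCalls = 0;
const loader = new MiniLoader<string, string>(async (ids) => {
  batchCalls++;
  return ids.map((id) => `posts for user ${id}`);
});

Promise.all([loader.load('a'), loader.load('b'), loader.load('c')]).then(() => {
  console.log(batchCalls); // 1: one round trip instead of three
});
```

The real DataLoader adds per-request caching and error handling on top, but the N+1 fix is exactly this collapse of N loads into one batch.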
For security, I implement JWT authentication using Passport strategies. My auth guard protects resolvers:
```typescript
@Injectable()
export class JwtAuthGuard extends AuthGuard('jwt') {
  canActivate(context: ExecutionContext) {
    // Custom validation logic
    return super.canActivate(context);
  }
}
```

Applying the guard to a mutation is a one-line decorator:

```typescript
@UseGuards(JwtAuthGuard)
@Mutation(() => Post)
async createPost(@Args('input') input: CreatePostInput) {
  // Protected mutation
}
```
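It helps to know what the `'jwt'` strategy is actually checking under the hood: an HMAC signature over the token's header and payload. Here's a self-contained HS256 sketch with `node:crypto` (illustrative only: no expiry or claim validation, hard-coded secret; in the app, Passport and the JWT module do this work):

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';
import { Buffer } from 'node:buffer';

// HS256 JWT signing/verification in miniature.
const b64url = (buf: Buffer) => buf.toString('base64url');

function sign(payload: object, secret: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: 'HS256', typ: 'JWT' })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(createHmac('sha256', secret).update(`${header}.${body}`).digest());
  return `${header}.${body}.${sig}`;
}

function verify(token: string, secret: string): object | null {
  const [header, body, sig] = token.split('.');
  const expected = b64url(createHmac('sha256', secret).update(`${header}.${body}`).digest());
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null; // bad signature
  return JSON.parse(Buffer.from(body, 'base64url').toString());
}

const token = sign({ sub: 'user-1' }, 's3cret');
console.log(verify(token, 's3cret') !== null);       // true
console.log(verify(token, 'wrong-secret') === null); // true: tampering rejected
```

Knowing this makes the guard less magical: a rejected token is just a failed HMAC comparison (or, in real deployments, an expired `exp` claim).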
Testing is non-negotiable in production systems. I validate resolver behavior with integration tests:
```typescript
describe('PostsResolver', () => {
  let resolver: PostsResolver;
  let service: PostsService;

  beforeEach(async () => {
    const module = await Test.createTestingModule({
      providers: [PostsResolver, PostsService],
    }).compile();

    resolver = module.get<PostsResolver>(PostsResolver);
    service = module.get<PostsService>(PostsService);
  });

  it('returns empty array when no posts', async () => {
    jest.spyOn(service, 'findAll').mockResolvedValue([]);
    expect(await resolver.posts()).toEqual([]);
  });
});
```
Before deployment, I optimize performance with:
- Query complexity analysis
- Persistent Redis connections
- Prisma middleware for logging
- Automated schema stitching
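Query complexity analysis deserves a quick illustration. In production I'd wire a library such as graphql-query-complexity into Apollo; the sketch below (a hypothetical `Selection` shape with invented costs) just shows the core idea: lists multiply the cost of their children, and anything over budget gets rejected before it touches the database.

```typescript
// Illustrative cost model: each field costs 1, and a list field multiplies
// its children's cost by the requested page size.
interface Selection {
  field: string;
  take?: number;          // page size for list fields
  children?: Selection[];
}

function complexity(sel: Selection): number {
  const childCost = (sel.children ?? []).reduce((sum, c) => sum + complexity(c), 0);
  const multiplier = sel.take ?? 1;
  return 1 + multiplier * childCost;
}

// Roughly: posts(take: 50) { comments(take: 10) { author } }
const query: Selection = {
  field: 'posts',
  take: 50,
  children: [{ field: 'comments', take: 10, children: [{ field: 'author' }] }],
};

const cost = complexity(query);
console.log(cost); // 1 + 50 * (1 + 10 * 1) = 551
if (cost > 500) console.log('rejected: query too expensive');
```

The exact weights matter less than having a ceiling at all; without one, a single deeply nested query can do the work of thousands of requests.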
For production monitoring, I combine:
- Health checks with `@nestjs/terminus`
- Metric collection using Prometheus
- Distributed tracing via OpenTelemetry
When deploying, I containerize with Docker and orchestrate via Kubernetes. The key is maintaining stateless services that scale horizontally.
This architecture has served me well across multiple production systems. The combination of NestJS’s structure, Prisma’s type safety, and Redis’s speed creates a robust foundation. What performance challenges have you faced with GraphQL? Share your experiences below - I’d love to hear what solutions you’ve implemented. If this approach helped you, please like and share this with other developers facing similar scaling challenges.