I’ve spent years building APIs, and nothing frustrates users more than slow responses. That’s why I’m passionate about high-performance GraphQL implementations. Recently, while optimizing a client’s e-commerce platform, I realized how transformative the NestJS-Prisma-Redis combination can be. Let me show you how to build something truly robust.
First, we establish our foundation. We start with a clean project structure:
mkdir graphql-api && cd graphql-api
npm init -y
npm install @nestjs/{core,common,graphql,apollo} apollo-server-express graphql
npm install prisma @prisma/client redis ioredis @nestjs/jwt
Our architecture centers around a well-organized module system. This keeps responsibilities separated as our API grows:
// src/app.module.ts
@Module({
imports: [
GraphQLModule.forRoot<ApolloDriverConfig>({
driver: ApolloDriver,
autoSchemaFile: 'schema.gql',
context: ({ req }) => ({ req }),
}),
UsersModule,
PostsModule,
PrismaModule,
RedisModule
],
})
export class AppModule {}
Data modeling comes next. With Prisma, we define our schema clearly. Have you considered how your schema affects query performance?
// prisma/schema.prisma
model User {
id String @id @default(cuid())
email String @unique
posts Post[]
}
model Post {
id String @id @default(cuid())
title String
author User @relation(fields: [authorId], references: [id])
authorId String
}
For database interactions, we create a reusable Prisma service. Notice the query logging middleware - invaluable for spotting slow operations during development:
// src/prisma/prisma.service.ts
@Injectable()
export class PrismaService extends PrismaClient {
constructor() {
super({ log: ['query'] });
}
async onModuleInit() {
await this.$connect();
this.$use(async (params, next) => {
const start = Date.now();
const result = await next(params);
console.log(`Query took ${Date.now() - start}ms`);
return result;
});
}
}
When building resolvers, we keep them lean. Here’s a user resolver that demonstrates clean data fetching:
// src/users/users.resolver.ts
@Resolver(() => User)
export class UsersResolver {
constructor(private prisma: PrismaService) {}
@Query(() => [User])
async users() {
return this.prisma.user.findMany();
}
}
Now for caching - where Redis shines. We implement a simple yet powerful caching interceptor:
// src/redis/redis.interceptor.ts
@Injectable()
export class RedisInterceptor implements NestInterceptor {
constructor(private redis: Redis) {}
async intercept(context: ExecutionContext, next: CallHandler) {
const ctx = GqlExecutionContext.create(context);
const key = ctx.getContext().req.originalUrl;
const cached = await this.redis.get(key);
if (cached) return of(JSON.parse(cached));
return next.handle().pipe(
tap(data => this.redis.set(key, JSON.stringify(data), 'EX', 60))
);
}
}
The N+1 problem plagues many GraphQL APIs. DataLoader solves this elegantly. How many database calls do you think this saves per request?
// src/dataloaders/user.loader.ts
@Injectable()
export class UserLoader {
constructor(private prisma: PrismaService) {}
createBatchUsers() {
return new DataLoader<string, User>(async (userIds) => {
const users = await this.prisma.user.findMany({
where: { id: { in: [...userIds] } },
});
return userIds.map(id => users.find(user => user.id === id));
});
}
}
Authentication is non-negotiable. We implement JWT protection with guards:
// src/auth/jwt.guard.ts
@Injectable()
export class JwtGuard extends AuthGuard('jwt') {
getRequest(context: ExecutionContext) {
const ctx = GqlExecutionContext.create(context);
return ctx.getContext().req;
}
}
Testing ensures reliability. We validate resolvers with this pattern:
// test/users.resolver.spec.ts
describe('UsersResolver', () => {
let resolver: UsersResolver;
beforeEach(async () => {
const module = await Test.createTestingModule({
providers: [UsersResolver, PrismaService],
}).compile();
resolver = module.get<UsersResolver>(UsersResolver);
});
it('returns users', async () => {
const result = await resolver.users();
expect(result.length).toBeGreaterThan(0);
});
});
For production, we consider these essentials:
- Query depth limiting
- Cost analysis
- Redis cluster configuration
- Prisma connection pooling
- Schema stitching for federated services
I’ve deployed this stack across multiple projects, consistently achieving <100ms response times under heavy load. The true power comes from how these tools complement each other - NestJS provides structure, Prisma handles data, and Redis accelerates everything.
What performance bottlenecks have you encountered in your APIs? Try this approach and share your results below. If this guide helped you, please like and share - let’s help more developers build faster APIs together!