I’ve been thinking about modern API development recently. The shift toward GraphQL offers real benefits, but building production-ready systems requires careful choices. That’s why I want to share a robust approach combining NestJS, Prisma, and Redis. These tools create efficient, maintainable APIs that scale. Let’s explore how they work together.
Starting a new project? Here’s how I set up the foundation:
nest new graphql-api-tutorial
npm install @nestjs/graphql @nestjs/apollo graphql @prisma/client ioredis @liaoliaots/nestjs-redis
npm install -D prisma
My project structure organizes functionality:
src/
├── auth/
├── common/
├── database/
├── modules/
└── redis/
Core configuration connects everything:
import { Module } from '@nestjs/common';
import { GraphQLModule } from '@nestjs/graphql';
import { ApolloDriver, ApolloDriverConfig } from '@nestjs/apollo';
// Local modules from the project structure above
import { PrismaModule } from './database/prisma.module';
import { RedisModule } from './redis/redis.module';

@Module({
  imports: [
    GraphQLModule.forRoot<ApolloDriverConfig>({
      driver: ApolloDriver,
      autoSchemaFile: 'schema.gql',
      subscriptions: { 'graphql-ws': true },
    }),
    PrismaModule,
    RedisModule,
  ],
})
export class AppModule {}
Database design comes next. Prisma’s schema language clearly defines models:
model User {
  id    String @id @default(cuid())
  email String @unique
  posts Post[]
}

model Post {
  id       String @id @default(cuid())
  title    String
  author   User   @relation(fields: [authorId], references: [id])
  authorId String
}
The Prisma service handles connections efficiently:
import { Injectable, OnModuleInit } from '@nestjs/common';
import { PrismaClient } from '@prisma/client';

@Injectable()
export class PrismaService extends PrismaClient implements OnModuleInit {
  async onModuleInit() {
    await this.$connect();
    // Log each query's duration via Prisma middleware
    this.$use(async (params, next) => {
      const start = Date.now();
      const result = await next(params);
      console.log(`Query took ${Date.now() - start}ms`);
      return result;
    });
  }
}
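The AppModule above imports a PrismaModule; a minimal version just provides and exports the service (this sketch assumes the files live in the database/ folder from the structure above):
  import { Module } from '@nestjs/common';
  import { PrismaService } from './prisma.service';

  @Module({
    providers: [PrismaService],
    exports: [PrismaService],
  })
  export class PrismaModule {}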
Defining GraphQL types ensures clean contracts. Notice how relationships map:
import { ObjectType, Field, ID } from '@nestjs/graphql';

@ObjectType()
export class User {
  @Field(() => ID)
  id: string;

  @Field()
  email: string;

  @Field(() => [Post])
  posts: Post[];
}
Input validation keeps data clean:
import { InputType, Field } from '@nestjs/graphql';
import { IsString, MinLength } from 'class-validator';

@InputType()
export class CreatePostInput {
  @Field()
  @IsString()
  @MinLength(5)
  title: string;
}
Resolvers become the bridge between schema and data. How do we make them efficient?
import { Resolver, Query, Args } from '@nestjs/graphql';

@Resolver(() => User)
export class UserResolver {
  constructor(private prisma: PrismaService) {}

  // nullable because findUnique returns null when no user matches
  @Query(() => User, { nullable: true })
  async user(@Args('id') id: string) {
    return this.prisma.user.findUnique({ where: { id } });
  }
}
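The input type from earlier plugs into a mutation the same way. Here is a sketch; the PostResolver name and the explicit authorId argument are simplifying assumptions (in practice the author usually comes from the authenticated user):
  @Resolver(() => Post)
  export class PostResolver {
    constructor(private prisma: PrismaService) {}

    @Mutation(() => Post)
    async createPost(
      @Args('input') input: CreatePostInput,
      @Args('authorId') authorId: string,
    ) {
      // Validation on CreatePostInput has already run by this point
      return this.prisma.post.create({
        data: { title: input.title, authorId },
      });
    }
  }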
Caching frequently accessed data boosts performance significantly. Redis integration is straightforward:
import { Injectable } from '@nestjs/common';
// InjectRedis is provided by the Redis wrapper installed earlier (@liaoliaots/nestjs-redis)
import { InjectRedis } from '@liaoliaots/nestjs-redis';
import Redis from 'ioredis';

@Injectable()
export class RedisService {
  constructor(@InjectRedis() private readonly redis: Redis) {}

  async get(key: string): Promise<string | null> {
    return this.redis.get(key);
  }

  async set(key: string, value: string, ttl: number) {
    // EX sets the key's time-to-live in seconds
    await this.redis.set(key, value, 'EX', ttl);
  }
}
Applying caching in resolvers prevents database overload:
async user(@Args('id') id: string) {
  const cached = await this.redisService.get(`user:${id}`);
  if (cached) return JSON.parse(cached);

  const user = await this.prisma.user.findUnique({ where: { id } });
  if (user) {
    // Cache for five minutes; skip caching misses
    await this.redisService.set(`user:${id}`, JSON.stringify(user), 300);
  }
  return user;
}
Authentication secures our API. JWT strategies integrate cleanly:
import { Injectable } from '@nestjs/common';
import { PassportStrategy } from '@nestjs/passport';
import { ExtractJwt, Strategy } from 'passport-jwt';

@Injectable()
export class JwtStrategy extends PassportStrategy(Strategy) {
  constructor() {
    super({
      jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
      secretOrKey: process.env.JWT_SECRET,
    });
  }

  async validate(payload: any) {
    return { userId: payload.sub, username: payload.username };
  }
}
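To actually enforce the strategy on resolvers, the standard Passport guard needs one tweak for GraphQL: the request lives on the GraphQL execution context rather than the plain HTTP layer. A minimal guard along these lines:
  import { ExecutionContext, Injectable } from '@nestjs/common';
  import { AuthGuard } from '@nestjs/passport';
  import { GqlExecutionContext } from '@nestjs/graphql';

  @Injectable()
  export class GqlAuthGuard extends AuthGuard('jwt') {
    // Pull the request out of the GraphQL execution context
    getRequest(context: ExecutionContext) {
      const ctx = GqlExecutionContext.create(context);
      return ctx.getContext().req;
    }
  }
Resolvers then opt in with @UseGuards(GqlAuthGuard).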
Real-time subscriptions add powerful capabilities. Setup is surprisingly simple:
import { PubSub } from 'graphql-subscriptions';

const pubSub = new PubSub();

@Subscription(() => Post, {
  resolve: (payload) => payload.newPost,
})
newPost() {
  return pubSub.asyncIterator('NEW_POST');
}
Performance optimization matters at scale. The N+1 problem? DataLoader solves it elegantly:
import DataLoader from 'dataloader';
import { Injectable } from '@nestjs/common';

@Injectable()
export class PostsLoader {
  constructor(private prisma: PrismaService) {}

  createLoader() {
    // Batch every requested author id into a single query,
    // then map the results back in the original key order
    return new DataLoader<string, Post[]>(async (userIds) => {
      const posts = await this.prisma.post.findMany({
        where: { authorId: { in: [...userIds] } },
      });
      return userIds.map((id) => posts.filter((p) => p.authorId === id));
    });
  }
}
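Wiring the loader in takes two steps: create one instance per request so the batch cache never leaks across users (for example from the GraphQLModule context factory, under a key such as postsLoader, which is an assumption here), and call it from a field resolver on User instead of querying posts directly:
  @ResolveField(() => [Post])
  async posts(
    @Parent() user: User,
    @Context('postsLoader') postsLoader: DataLoader<string, Post[]>,
  ) {
    // Every posts() call in the same request is batched into one findMany
    return postsLoader.load(user.id);
  }
ResolveField, Parent, and Context are all exported by @nestjs/graphql.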
Testing ensures reliability. I focus on key areas:
describe('UserResolver', () => {
  let resolver: UserResolver;

  // Mock Prisma so the test never touches a real database
  const prismaMock = {
    user: {
      findUnique: jest.fn().mockResolvedValue({ id: 'user1', email: '[email protected]' }),
    },
  };

  beforeEach(async () => {
    const module = await Test.createTestingModule({
      providers: [UserResolver, { provide: PrismaService, useValue: prismaMock }],
    }).compile();
    resolver = module.get<UserResolver>(UserResolver);
  });

  it('returns user by id', async () => {
    const user = await resolver.user('user1');
    expect(user.email).toEqual('[email protected]');
  });
});
Deployment requires attention to environment specifics. My Dockerfile typically includes:
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# Generate the Prisma client against the copied schema before building
RUN npx prisma generate
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/main.js"]
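Building and running the image locally is then a matter of two commands (the image name and env file are placeholders):
  docker build -t graphql-api .
  docker run -p 3000:3000 --env-file .env graphql-api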
Monitoring in production catches issues early. I combine logging with tools like Prometheus:
import * as Prom from 'prom-client';

const counter = new Prom.Counter({
  name: 'graphql_requests_total',
  help: 'Total GraphQL requests',
});

// Count every request hitting the GraphQL endpoint
app.use((req, res, next) => {
  if (req.path === '/graphql') counter.inc();
  next();
});
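Prometheus still needs an endpoint to scrape. prom-client's default registry can back a small handler like this (the /metrics path is simply the conventional choice):
  // Expose the default registry so Prometheus can scrape it
  app.use('/metrics', async (req, res) => {
    res.set('Content-Type', Prom.register.contentType);
    res.end(await Prom.register.metrics());
  });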
This stack delivers production-ready GraphQL APIs efficiently. Each tool solves specific challenges while working harmoniously together. What optimizations have you found most effective in your projects?
If this guide helped clarify GraphQL API construction, consider sharing it with others facing similar challenges. Your experiences and insights matter: what deployment strategies have worked best for you? Leave your thoughts below.