Lately, I’ve been wrestling with scaling our GraphQL API at work. As our team grew, so did conflicts around schema ownership and deployment bottlenecks. That’s when Apollo Federation caught my eye—it promised a way to split our monolith while keeping a unified API surface. Today, I’ll share how we implemented it with NestJS, creating a robust gateway that handles millions of requests daily.
Let me show you our journey from a single GraphQL server to a federated architecture. We started by designing independent subgraphs: Users handled authentication, Products managed inventory, and Orders processed transactions. Each team now owns their service entirely. How did we ensure these pieces worked together seamlessly? The secret lies in Apollo Federation’s entity linking.
Here’s how we structured the Users service using NestJS:
// User entity with federation directives
import { Directive, Field, ID, ObjectType } from '@nestjs/graphql';
import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

@ObjectType()
@Directive('@key(fields: "id")')
@Entity()
export class User {
  @PrimaryGeneratedColumn('uuid')
  @Field(() => ID)
  id: string;

  @Field()
  @Column({ unique: true })
  email: string;

  // No @Field() decorator, so the password never reaches the GraphQL schema
  @Column()
  password: string;
}
// Resolver with reference resolution
import { Resolver, ResolveReference } from '@nestjs/graphql';
import { User } from './user.entity';
import { UsersService } from './users.service';

@Resolver(() => User)
export class UsersResolver {
  constructor(private readonly usersService: UsersService) {}

  // Called whenever another subgraph references a User by its @key fields
  @ResolveReference()
  resolveReference(reference: { __typename: string; id: string }) {
    return this.usersService.findOne(reference.id);
  }
}
Notice the @key directive? That’s our federation anchor: it declares this entity can be referenced across services. When the Orders service needs user data, it extends the type without touching the Users codebase:
# In the Orders service schema
extend type User @key(fields: "id") {
  id: ID! @external
  orders: [Order!]!
}
This separation proved invaluable during Black Friday. Our Products team deployed inventory updates independently while Orders scaled horizontally. But what about the gateway stitching everything together? We configured it like this:
// Apollo Gateway setup
import { ApolloGateway, IntrospectAndCompose } from '@apollo/gateway';

const gateway = new ApolloGateway({
  supergraphSdl: new IntrospectAndCompose({
    subgraphs: [
      { name: 'users', url: process.env.USERS_URL },
      { name: 'orders', url: process.env.ORDERS_URL },
      { name: 'products', url: process.env.PRODUCTS_URL },
    ],
  }),
});
Authentication became critical with distributed services. We implemented JWT verification at the gateway level using a custom Apollo plugin:
// Gateway auth plugin: reject invalid tokens before any subgraph is called
import { GraphQLError } from 'graphql';

const authPlugin = {
  async requestDidStart({ request }) {
    return {
      // didResolveOperation fires before execution, so bad tokens never reach a subgraph
      async didResolveOperation() {
        const token = request.http.headers.get('authorization')?.split(' ')[1];
        if (token && !verifyToken(token)) {
          throw new GraphQLError('Invalid token', {
            extensions: { code: 'UNAUTHENTICATED', http: { status: 401 } },
          });
        }
      },
    };
  },
};
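We registered it on the gateway’s ApolloServer instance via its plugins array, so every operation passes through the check before execution.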
For caching, we layered Redis with request deduplication. The key insight? Cache at the gateway for cross-service queries and within subgraphs for service-specific data. Our cache hit rate jumped from 15% to 63%.
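Here’s a sketch of the gateway layer, assuming Apollo Server 4 with the Keyv Redis adapter (the redis:// URL implies the @keyv/redis driver is installed):

// Redis-backed response cache at the gateway
import { ApolloServer } from '@apollo/server';
import responseCachePlugin from '@apollo/server-plugin-response-cache';
import { KeyvAdapter } from '@apollo/utils.keyvadapter';
import Keyv from 'keyv';

const server = new ApolloServer({
  gateway,
  // Shared Redis store: identical cross-service queries hit the cache
  cache: new KeyvAdapter(new Keyv(process.env.REDIS_URL)),
  // Full-response caching still requires @cacheControl hints from subgraphs
  plugins: [responseCachePlugin()],
});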
Monitoring required special attention. We integrated OpenTelemetry tracing across all services:
# Launching services with tracing
NODE_OPTIONS='--require @opentelemetry/auto-instrumentations-node/register' \
node dist/users-service/main.js
Deployment taught us hard lessons. Initially, we underestimated the importance of schema checks. Now we run composition checks in CI before every deploy:
npx @apollo/rover supergraph compose --config ./supergraph.yaml
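The config file maps each subgraph to its routing URL and schema source; ours looks roughly like this (URLs and paths are placeholders):

# supergraph.yaml
federation_version: 2
subgraphs:
  users:
    routing_url: http://users.internal:4001/graphql
    schema:
      file: ./schemas/users.graphql
  orders:
    routing_url: http://orders.internal:4002/graphql
    schema:
      file: ./schemas/orders.graphql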
Common pitfalls? Schema conflicts top the list. We avoid them by:
- Using linters with federation rules
- Enforcing namespaces for custom types
- Running schema regression tests (see the check below)
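For the regression tests, CI validates each changed subgraph against the live graph. This assumes the graph is registered in Apollo GraphOS; the graph ref and schema path are placeholders:

npx @apollo/rover subgraph check my-graph@production \
  --name users \
  --schema ./schemas/users.graphql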
Performance tuning mattered most. We optimized by:
- Batching reference resolutions
- Setting query timeouts
- Adding persisted queries (sketch after the batching example)
Here’s the batching optimization, which we built on DataLoader:

// Batch user loading in the Orders service
import DataLoader from 'dataloader';
import { Injectable, Scope } from '@nestjs/common';
import { User } from './user.entity';

@Injectable({ scope: Scope.REQUEST }) // a fresh loader per request avoids stale cache entries
export class UsersDataSource {
  private readonly loader = new DataLoader<string, User>(
    // Every id requested in the same tick collapses into one call;
    // fetchUsersByIds must return users in the same order as the ids
    (ids) => fetchUsersByIds([...ids])
  );

  load(id: string) {
    return this.loader.load(id);
  }
}
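For the persisted-queries item above, a minimal client-side sketch using Apollo Client’s automatic persisted queries, which Apollo Server supports out of the box (the crypto-hash package supplies the hash function):

// Send query hashes instead of full query strings
import { HttpLink } from '@apollo/client';
import { createPersistedQueryLink } from '@apollo/client/link/persisted-queries';
import { sha256 } from 'crypto-hash';

const link = createPersistedQueryLink({ sha256 }).concat(
  new HttpLink({ uri: '/graphql' }) // the gateway endpoint
);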
The result? Our 95th percentile latency dropped from 1.2s to 380ms. Error handling improved too—we standardized error codes across services and implemented circuit breakers.
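The circuit breakers deserve a sketch of their own. Ours wrap cross-service calls; here’s the idea using the opossum library around the same fetchUsersByIds helper (thresholds are illustrative):

// Stop hammering the Users service once it starts failing
import CircuitBreaker from 'opossum';

const breaker = new CircuitBreaker(fetchUsersByIds, {
  timeout: 3000,                // calls slower than 3s count as failures
  errorThresholdPercentage: 50, // open the circuit at a 50% failure rate
  resetTimeout: 10000,          // probe again after 10s
});

const users = await breaker.fire(['user-id-1']);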
What surprised me most was how federation changed our team dynamics. Frontend developers stopped asking “who owns this field?” because the gateway presented a unified schema. Backend teams moved faster with isolated test coverage. We even onboarded two new microservices in a week with zero gateway redeploys.
Ready to try this yourself? Start small: break one entity into a subgraph. Monitor performance before scaling. Remember, federation excels when you have clear domain boundaries—don’t force it where a monolith still works.
If this resonates with your scaling challenges, give it a try! Share your experience in the comments—I’d love to hear what works in your environment. Found this useful? Pass it along to your team!