I’ve been thinking a lot about how modern applications handle increasing loads while maintaining responsiveness. This challenge led me to explore event-driven architectures - a powerful approach where services communicate through events rather than direct requests. Today, I’ll walk you through building a robust microservice using Fastify, NATS JetStream, and TypeScript. You’ll see why this combination delivers exceptional performance while keeping our code maintainable.
Setting up requires Node.js 18+ and NATS Server. After initializing our project, we install core dependencies:
npm install fastify @fastify/cors @fastify/helmet nats typescript
Our TypeScript configuration focuses on strict typing and modern language features. Decorator support is enabled for libraries that rely on it, though none of the code below does:
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ES2022",
    "strict": true,
    "esModuleInterop": true,
    "experimentalDecorators": true
  }
}
Let’s create our core event interfaces. TypeScript’s discriminated unions ensure type safety across our system:
interface BaseEvent {
  id: string;
  type: 'user.created' | 'order.placed';
  timestamp: string; // ISO 8601 string; Date objects don't survive JSON round-trips
}
interface UserCreatedEvent extends BaseEvent {
  type: 'user.created';
  userId: string;
  email: string;
}
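The consumer code later in this article parses an OrderPlacedEvent that we haven't defined yet. Here's a sketch of how it could extend the union; the interfaces above are repeated so the snippet stands alone, and any field beyond `orderId` is an illustrative assumption:

```typescript
// BaseEvent and UserCreatedEvent as defined above.
interface BaseEvent {
  id: string;
  type: 'user.created' | 'order.placed';
  timestamp: string; // ISO 8601
}

interface UserCreatedEvent extends BaseEvent {
  type: 'user.created';
  userId: string;
  email: string;
}

// Sketch of the order event consumed later; `amount` is an
// assumption (order total in cents) for illustration only.
interface OrderPlacedEvent extends BaseEvent {
  type: 'order.placed';
  orderId: string;
  userId: string;
  amount: number;
}

type AppEvent = UserCreatedEvent | OrderPlacedEvent;

// The `type` discriminant lets TypeScript narrow automatically:
function describe(event: AppEvent): string {
  switch (event.type) {
    case 'user.created':
      return `user ${event.userId} (${event.email})`;
    case 'order.placed':
      return `order ${event.orderId} totalling ${event.amount}`;
  }
}

const sample: OrderPlacedEvent = {
  id: 'evt-1',
  type: 'order.placed',
  timestamp: '2024-01-01T00:00:00.000Z',
  orderId: 'ord-42',
  userId: 'usr-7',
  amount: 1999,
};
```

Inside each `case`, TypeScript knows exactly which variant we hold, so accessing `event.orderId` in the `user.created` branch would be a compile-time error.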
Configuration management is vital. We centralize settings with environment-aware defaults:
export const config = {
  server: {
    port: parseInt(process.env.PORT || '3000', 10),
    host: process.env.HOST || '0.0.0.0'
  },
  nats: {
    servers: (process.env.NATS_SERVERS || 'nats://localhost:4222').split(',')
  }
};
Building our Fastify instance, we prioritize security and observability:
import Fastify from 'fastify';

const app = Fastify({ logger: true });

await app.register(import('@fastify/helmet'));
await app.register(import('@fastify/cors'), { origin: true });

app.get('/health', async () => {
  // natsConnection is established in the next section
  return { status: 'ok', natsConnected: !natsConnection.isClosed() };
});
Connecting to NATS JetStream involves creating a persistent connection. Notice how we handle reconnections automatically:
import { connect, JetStreamClient } from 'nats';
const natsConnection = await connect({
  servers: config.nats.servers,
  reconnect: true,
  maxReconnectAttempts: 10
});

const jetStream = natsConnection.jetstream();
Publishing events becomes straightforward with our typed interface. Have you considered how message deduplication prevents duplicate processing?
import { JSONCodec } from 'nats';

const jc = JSONCodec<BaseEvent>(); // JetStream payloads are bytes, so we encode

async function publishEvent(event: BaseEvent) {
  await jetStream.publish(`events.${event.type}`, jc.encode(event), {
    msgID: event.id // enables server-side deduplication
  });
}
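Deduplication by msgID only works within the stream's duplicate-tracking window, so the stream itself must exist with that window configured. A minimal sketch of such a stream definition; the name `EVENTS` and the two-minute window are assumptions, and in a running service this object would be passed to `(await natsConnection.jetstreamManager()).streams.add(...)`:

```typescript
// Sketch of the stream definition backing our `events.*` subjects.
// 'EVENTS' and the 2-minute duplicate window are assumptions.
const eventsStreamConfig = {
  name: 'EVENTS',
  subjects: ['events.>'], // captures events.user.created, events.order.placed, ...
  // duplicate_window is specified in nanoseconds; messages republished
  // with the same msgID inside this horizon are silently dropped.
  duplicate_window: 2 * 60 * 1_000_000_000,
};
```

Pick the window to comfortably exceed your publisher's retry horizon; a retry arriving after the window closes will be stored again as a new message.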
For event consumption, we create durable consumers to survive restarts:
// Durable consumers are defined through the JetStream manager, then
// fetched by stream and consumer name ('EVENTS' is our assumed stream)
const jsm = await natsConnection.jetstreamManager();
await jsm.consumers.add('EVENTS', {
  durable_name: 'order-processor',
  ack_policy: AckPolicy.Explicit // import AckPolicy from 'nats'
});
const consumer = await jetStream.consumers.get('EVENTS', 'order-processor');
const messages = await consumer.consume();
for await (const message of messages) {
  try {
    const event = message.json<OrderPlacedEvent>(); // decodes the byte payload
    await processOrder(event);
    message.ack(); // explicit acknowledgement
  } catch (error) {
    message.nak(); // negative acknowledgement schedules redelivery
  }
}
Error handling deserves special attention. We implement graceful shutdown to prevent message loss during termination:
process.on('SIGTERM', async () => {
  app.log.info('Shutting down');
  await app.close();            // stop accepting HTTP requests first
  await natsConnection.drain(); // let in-flight messages finish, then close
  // Note: we don't delete the durable consumer; its state must survive restarts
});
Testing event flows requires simulating real-world conditions. An integration test (Jest-style, shown here) against a local NATS server exercises the full publish/consume path:
test('order processing workflow', async () => {
  await publishTestEvent(mockOrderEvent);
  await expect(orderProcessed).resolves.toMatchObject({
    orderId: mockOrderEvent.orderId
  });
});
In production, we monitor key metrics like message processing latency and error rates. NATS’ instrumentation gives us crucial visibility into backpressure situations. How might we scale this horizontally? Simply add more consumer instances - JetStream automatically load balances across them.
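As a sketch of what that monitoring can look like at the application level (the class and method names here are illustrative, not a NATS API; a real service would export these values to Prometheus or similar):

```typescript
// Minimal in-process metrics tracker for consumer health.
// All names are illustrative assumptions.
class EventMetrics {
  private latencies: number[] = [];
  private errors = 0;
  private processed = 0;

  recordSuccess(latencyMs: number): void {
    this.processed++;
    this.latencies.push(latencyMs);
  }

  recordError(): void {
    this.errors++;
  }

  // Fraction of handled messages that failed processing.
  errorRate(): number {
    const total = this.processed + this.errors;
    return total === 0 ? 0 : this.errors / total;
  }

  // Median processing latency over everything recorded so far.
  p50LatencyMs(): number {
    if (this.latencies.length === 0) return 0;
    const sorted = [...this.latencies].sort((a, b) => a - b);
    return sorted[Math.floor(sorted.length / 2)];
  }
}

// Usage: record a few processed messages and one failure.
const metrics = new EventMetrics();
metrics.recordSuccess(12);
metrics.recordSuccess(30);
metrics.recordSuccess(18);
metrics.recordError();
```

Calls to `recordSuccess`/`recordError` would sit inside the consume loop's `try`/`catch`; a rising error rate or p50 latency is usually the first visible symptom of backpressure.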
Performance optimization starts at the protocol level. NATS' lightweight binary protocol and Fastify's efficient routing can reach throughput on the order of 100,000 messages per second on modest hardware, though real numbers depend heavily on payload size, persistence settings, and acknowledgement policy. We further boost throughput by batching non-critical operations.
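Batching non-critical operations can be as simple as buffering items and flushing them together. A sketch, with the batch size and flush callback as assumptions (a production version would also flush on a timer and await an async sink):

```typescript
// Size-based batcher for non-critical work (e.g. audit log writes).
// The flush callback and batch size of 3 below are illustrative.
class Batcher<T> {
  private buffer: T[] = [];

  constructor(
    private readonly maxSize: number,
    private readonly flushFn: (items: T[]) => void,
  ) {}

  add(item: T): void {
    this.buffer.push(item);
    if (this.buffer.length >= this.maxSize) {
      this.flush();
    }
  }

  flush(): void {
    if (this.buffer.length === 0) return;
    const items = this.buffer;
    this.buffer = []; // swap before flushing so re-entrant adds are safe
    this.flushFn(items);
  }
}

// Usage: collect entries and write them out in batches of three.
const flushed: number[][] = [];
const batcher = new Batcher<number>(3, (items) => {
  flushed.push(items);
});

for (const n of [1, 2, 3, 4]) {
  batcher.add(n);
}
batcher.flush(); // drain the remainder on shutdown
```

Hook `flush()` into the SIGTERM handler so buffered items aren't lost during termination.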
Deploying to Kubernetes? Our health endpoints integrate seamlessly with liveness probes. Remember to configure NATS clustering for high availability - a three-node cluster provides excellent fault tolerance.
Common pitfalls include unacknowledged messages piling up and type validation gaps. We combat these with automated schema validation at ingress:
app.post('/events', {
  schema: {
    body: userCreatedEventSchema // JSON Schema definition
  }
}, async (request) => {
  await publishEvent(request.body as UserCreatedEvent);
  return { accepted: true };
});
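The `userCreatedEventSchema` referenced above isn't shown elsewhere; a plausible definition mirrors the UserCreatedEvent interface, with the `format` constraints added as assumptions on top:

```typescript
// Sketch of the JSON Schema backing the /events route validation.
// Field list mirrors UserCreatedEvent; `format` constraints are
// assumptions layered on for stricter ingress checking.
const userCreatedEventSchema = {
  type: 'object',
  required: ['id', 'type', 'timestamp', 'userId', 'email'],
  additionalProperties: false, // reject unexpected fields at the edge
  properties: {
    id: { type: 'string' },
    type: { type: 'string', const: 'user.created' },
    timestamp: { type: 'string', format: 'date-time' },
    userId: { type: 'string' },
    email: { type: 'string', format: 'email' },
  },
};
```

Fastify compiles this schema once at startup, so per-request validation is cheap, and malformed events are rejected before they ever reach JetStream.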
This journey showed me how event-driven systems create resilient, scalable architectures. The synergy between Fastify’s speed, TypeScript’s safety, and JetStream’s reliability forms a powerful foundation for modern applications. What challenges have you faced with distributed systems? Share your experiences below - I’d love to hear your solutions and encourage you to share this article with others exploring microservices.