I’ve spent countless hours wrestling with file uploads in various projects, often facing issues like server crashes, slow performance, and security vulnerabilities. That’s why I decided to build a high-performance file upload system using Fastify, Multer, and AWS S3. If you’ve ever dealt with unreliable uploads or wanted to scale your file handling, this guide will show you how to create a robust solution from scratch.
Let’s start by setting up our project. Why begin with the basics? Because a solid foundation prevents countless headaches later. We’ll use TypeScript for type safety and modern Node.js features. Here’s how to initialize the project and install dependencies:
npm init -y
npm install fastify @fastify/multipart @aws-sdk/client-s3 multer uuid
npm install -D typescript @types/node @types/multer ts-node
Next, configure TypeScript in tsconfig.json to ensure smooth development. I always set strict mode to catch errors early.
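Here is roughly what I mean; the exact target and module settings depend on your Node version, so treat this as a starting point rather than a prescription (pair it with "type": "module" in package.json so the top-level await used below works):

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "outDir": "dist"
  },
  "include": ["src"]
}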
Now, let’s build the Fastify server. Have you considered how multipart handling affects performance? Fastify’s built-in multipart support streams files directly, reducing memory usage. Here’s a basic server setup:
import fastify from 'fastify';
import multipart from '@fastify/multipart';

const server = fastify();

// Register multipart support with a 100 MB per-file limit
await server.register(multipart, {
  limits: { fileSize: 100 * 1024 * 1024 }
});

server.post('/upload', async (request, reply) => {
  const data = await request.file();
  if (!data) {
    return reply.code(400).send({ error: 'No file provided' });
  }
  // data.file is a readable stream; process or forward it here
  return reply.send({ uploaded: data.filename });
});

await server.listen({ port: 3000 });
Integrating Multer adds middleware-style handling on top of this. But why use both Fastify multipart and Multer? Fastify's plugin owns the transport and streaming, while Multer's fileFilter hook and storage engines add validation and processing on top. Bear in mind that Multer was built for Express, so in a Fastify app it usually runs behind a small adapter.
When it comes to AWS S3, direct streaming saves bandwidth and time. Instead of buffering files locally, we pipe them straight to S3. Here’s a snippet for S3 uploads:
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import type { Readable } from 'node:stream';

const s3Client = new S3Client({ region: 'us-east-1' });

// PutObject needs to know the body size up front when streaming,
// so pass the length if you have it; for unknown sizes, the multipart
// Upload helper from @aws-sdk/lib-storage is a better fit.
const uploadToS3 = async (fileStream: Readable, key: string, contentLength?: number) => {
  const command = new PutObjectCommand({
    Bucket: 'my-bucket',
    Key: key,
    Body: fileStream,
    ContentLength: contentLength
  });
  return s3Client.send(command);
};
Large files require careful memory management. Did you know that streaming prevents your server from holding entire files in memory? This approach handles gigabyte-sized files without breaking a sweat.
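To make that concrete, here is the upload route from earlier, now wired to the uploadToS3 helper in a minimal sketch; the key scheme built from uuid and the original filename is my own choice, not something either library requires, and a stream of unknown length is better served by the multipart upload shown further down:

import { v4 as uuidv4 } from 'uuid';

server.post('/upload', async (request, reply) => {
  const data = await request.file();
  if (!data) {
    return reply.code(400).send({ error: 'No file provided' });
  }

  // data.file is the incoming readable stream; it goes straight to S3
  // without ever being buffered in full on this server.
  const key = `uploads/${uuidv4()}-${data.filename}`;
  await uploadToS3(data.file, key);

  return reply.send({ key });
});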
Security is non-negotiable. I always validate file types and sizes on the server side, even if client-side checks exist. Multer makes this straightforward with its fileFilter option.
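Here is a sketch of what that server-side gate can look like with Multer's fileFilter and limits options; the allowed MIME types are purely illustrative:

import multer from 'multer';

// Reject anything that is not an allowed type or exceeds the size limit
// before it is processed any further.
const upload = multer({
  storage: multer.memoryStorage(),
  limits: { fileSize: 100 * 1024 * 1024 },
  fileFilter: (req, file, cb) => {
    const allowed = ['image/png', 'image/jpeg', 'application/pdf'];
    if (allowed.includes(file.mimetype)) {
      cb(null, true);
    } else {
      cb(new Error(`Unsupported file type: ${file.mimetype}`));
    }
  }
});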
Tracking upload progress keeps users informed. How can we implement this without complicating the code? We use events from the stream to update progress in real-time.
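One lightweight way to do that, sketched below, is a small Transform that counts bytes as chunks flow from the client toward S3; how you surface the numbers (WebSocket, server-sent events, polling) is up to you, and the route reuses the uploadToS3 helper for simplicity:

import { Transform } from 'node:stream';

server.post('/upload-with-progress', async (request, reply) => {
  const data = await request.file();
  if (!data) {
    return reply.code(400).send({ error: 'No file provided' });
  }

  let received = 0;

  // Every chunk passes through the counter on its way to S3.
  const counter = new Transform({
    transform(chunk, _encoding, callback) {
      received += chunk.length;
      request.log.info(`upload progress: ${received} bytes received`);
      callback(null, chunk);
    }
  });

  await uploadToS3(data.file.pipe(counter), `uploads/${data.filename}`);
  return reply.send({ received });
});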
Resumable uploads are a game-changer for poor connections. By implementing chunked uploads, users can pause and resume transfers. AWS S3 supports multipart uploads natively for this purpose.
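The managed Upload class from @aws-sdk/lib-storage (an extra dependency beyond the ones installed earlier) drives S3 multipart uploads and reports per-part progress; for uploads that must survive a dropped connection you would keep the UploadId and part ETags from the low-level CreateMultipartUploadCommand and UploadPartCommand calls so the client can resume where it left off. A minimal sketch of the managed version:

import { Upload } from '@aws-sdk/lib-storage';
import { S3Client } from '@aws-sdk/client-s3';
import type { Readable } from 'node:stream';

const client = new S3Client({ region: 'us-east-1' });

const multipartUpload = async (fileStream: Readable, key: string) => {
  const upload = new Upload({
    client,
    params: { Bucket: 'my-bucket', Key: key, Body: fileStream },
    partSize: 10 * 1024 * 1024, // 10 MB parts; S3 requires at least 5 MB per part
    queueSize: 4                // parts uploaded in parallel
  });

  // Fires as parts complete, which works well for streams of unknown length.
  upload.on('httpUploadProgress', (progress) => {
    console.log(`uploaded ${progress.loaded} of ${progress.total ?? 'unknown'} bytes`);
  });

  return upload.done();
};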
Error handling must be comprehensive. I log errors for debugging and return user-friendly messages. Fastify’s error hooks help centralize this logic.
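A centralized handler along these lines keeps the logging in one place and the client-facing messages friendly; the file-too-large branch relies on @fastify/multipart's error code:

server.setErrorHandler((error, request, reply) => {
  // Log everything for debugging, but never leak internals to the client.
  request.log.error(error);

  if (error.code === 'FST_REQ_FILE_TOO_LARGE') {
    return reply.code(413).send({ error: 'File exceeds the size limit' });
  }

  return reply.code(500).send({ error: 'Upload failed, please try again' });
});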
Testing ensures reliability. I write unit tests for upload logic and integration tests for end-to-end flows. Mocking AWS services during tests prevents unnecessary costs.
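One way to keep S3 out of the loop, assuming aws-sdk-client-mock as a dev dependency and a Jest-style runner, is to stub the client and assert on the commands it would have sent; a rough sketch:

import { mockClient } from 'aws-sdk-client-mock';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { Readable } from 'node:stream';

const s3Mock = mockClient(S3Client);

beforeEach(() => {
  s3Mock.reset();
});

test('uploadToS3 sends the object to the right key', async () => {
  // No request leaves the process; the mock records the call instead.
  s3Mock.on(PutObjectCommand).resolves({ ETag: '"abc123"' });

  const stream = Readable.from(Buffer.from('hello world'));
  await uploadToS3(stream, 'uploads/test.txt', 11);

  const calls = s3Mock.commandCalls(PutObjectCommand);
  expect(calls).toHaveLength(1);
  expect(calls[0].args[0].input.Key).toBe('uploads/test.txt');
});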
Performance optimization involves tuning multipart limits and using CDNs. Fastify’s lightweight nature helps maintain low latency even under heavy load.
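Those limits all live on the register call shown earlier; the values below are starting points to tune against real traffic, not recommendations:

await server.register(multipart, {
  limits: {
    fileSize: 100 * 1024 * 1024, // max bytes per file
    files: 5,                    // max file fields per request
    fields: 10,                  // max non-file fields per request
    fieldSize: 1024 * 1024       // max bytes per non-file field value
  }
});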
Deploying to production requires monitoring and scaling. I use Docker for consistency and set up alerts for upload failures.
Building this system taught me that simplicity and efficiency go hand in hand. Every decision, from streaming to validation, contributes to a seamless user experience.
If you found this guide helpful, please like, share, and comment with your experiences. Your feedback helps improve content for everyone. Let’s build better systems together!