I’ve been building web applications for years, and one of the most consistently troublesome features has always been file uploads. I can’t count the times I’ve seen a system grind to a halt because a user decided to upload a high-resolution video, or the security headaches that come with accepting files from the internet. It’s a problem that seems simple on the surface but hides layers of complexity. That’s why I want to talk to you about a better way. Today, we’re going to build a system that handles files securely, efficiently, and with the kind of type safety that lets me sleep at night. If you’ve ever struggled with uploads that crash your server or leave you vulnerable, stick with me. This approach changed everything for my projects.
Let’s start with the core issue. The old way, the way many tutorials still teach, involves sending the file directly to your Node.js server. Your server receives the raw bytes, saves them to disk or memory, and then forwards them to a storage service like AWS S3. This method has a critical flaw: it turns your application server into a traffic cop for every single byte of data. Your server’s resources are consumed reading and writing files, which it was never designed to do at scale. Bandwidth costs double, because every file crosses the network twice: once into your server, then out again to S3. A few large uploads can bring your entire API to its knees. Have you ever wondered why your app gets slow when files are being uploaded? This is often why.
There’s a smarter path. Instead of being the middleman, your server can act as a trusted gatekeeper. It can issue a temporary, secure key—a presigned URL—that allows a user’s browser to upload a file directly to AWS S3. Your server never touches the file data. It only handles the metadata, the permissions, and the business logic. This shift is profound. It offloads the heavy lifting to a service built for it, saving your server’s CPU and bandwidth for what it does best: running your application code. The user experience improves, too, as uploads can be faster and more reliable.
So, how does it work in practice? Imagine a user selects a file in your web app. Before any upload begins, your frontend asks your backend for permission. The backend checks who the user is, validates the file type and size, and then generates a special URL. This URL is a direct line to a specific spot in your S3 bucket, but it only works for a short time and only for that one operation. The browser uses this URL to send the file straight to S3. Once the upload is complete, S3 can notify your backend so you can update your database and trigger any necessary processing.
Let’s get our hands dirty with some code. The first step is setting up a solid foundation. I always start with TypeScript because it catches errors before they happen. Here’s a basic setup for our core upload service.
// src/modules/upload/upload.service.ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { z } from "zod";

// Define what a valid upload request looks like
const uploadRequestSchema = z.object({
  fileName: z.string().min(1),
  fileType: z.string().regex(/^(image|video)\/|^application\/pdf$/), // Allow images, videos, PDFs
  fileSize: z.number().int().positive().max(100 * 1024 * 1024), // Max 100MB
});

export class UploadService {
  private s3Client: S3Client;

  constructor() {
    this.s3Client = new S3Client({ region: process.env.AWS_REGION });
  }

  async generatePresignedUrl(userId: string, requestData: unknown) {
    // Validate the incoming data
    const validated = uploadRequestSchema.parse(requestData);

    // Create a unique key for the file in S3.
    // encodeURIComponent guards against path tricks like "../" in the file name.
    const s3Key = `uploads/${userId}/${Date.now()}-${encodeURIComponent(validated.fileName)}`;

    const command = new PutObjectCommand({
      Bucket: process.env.S3_BUCKET_NAME,
      Key: s3Key,
      ContentType: validated.fileType,
      // Metadata can be added here
      Metadata: {
        uploadedBy: userId,
      },
    });

    // Generate a URL that expires in 5 minutes
    const presignedUrl = await getSignedUrl(this.s3Client, command, {
      expiresIn: 300,
    });

    return {
      presignedUrl,
      s3Key,
      expiresAt: new Date(Date.now() + 300 * 1000).toISOString(),
    };
  }
}
This code does a few important things. It uses Zod to define strict rules for what can be uploaded. No more guessing about file types or sizes. It creates a structured path in S3, which helps with organization and permissions. The presigned URL is generated with a short lifespan, reducing the risk if it were somehow intercepted. What do you think happens if someone tries to upload a file type we haven’t allowed? The validation step catches it immediately, and no URL is issued.
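To make that rejection concrete, here is a dependency-free sketch of the same rules. `isValidUploadRequest` is a hypothetical helper of my own naming that mirrors the schema; it is not part of the service above:

```typescript
// A hypothetical standalone check that mirrors the Zod rules above.
const ALLOWED_TYPE = /^(image|video)\/|^application\/pdf$/;
const MAX_SIZE = 100 * 1024 * 1024; // 100MB, matching the schema

function isValidUploadRequest(fileName: string, fileType: string, fileSize: number): boolean {
  return (
    fileName.length >= 1 &&
    ALLOWED_TYPE.test(fileType) &&
    Number.isInteger(fileSize) &&
    fileSize > 0 &&
    fileSize <= MAX_SIZE
  );
}
```

In the real service, `uploadRequestSchema.parse` performs these checks for you and throws a descriptive `ZodError` instead of returning false, so no URL is ever issued for an invalid request.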
Now, we need to handle the other end of the process. Once the file is in S3, we should confirm it and maybe process it. We can use S3 Event Notifications to tell our system when a new file arrives. This allows for asynchronous workflows, like generating thumbnails or scanning for viruses, without blocking the user. Here’s a simple way to handle that confirmation in our API.
// In our upload controller
import { Request, Response } from 'express';
import { db } from '../db';                    // your database client (e.g. Prisma)
import { queueProcessingJob } from '../queue'; // your background-job helper

export const confirmUpload = async (req: Request, res: Response) => {
  const { s3Key, checksum } = req.body; // Client sends the key and a file hash
  const userId = req.user.id; // Populated by your authentication middleware

  // Update our database record
  await db.fileUpload.update({
    where: { s3Key, userId },
    data: {
      status: 'UPLOADED',
      checksum,
      completedAt: new Date(),
    },
  });

  // Kick off async processing (thumbnails, virus scanning, etc.)
  await queueProcessingJob(s3Key);

  res.json({ success: true, message: 'Upload confirmed, processing started.' });
};
This confirms the upload from the client’s perspective and moves the file record to a new state. The actual processing, like virus scanning, happens in the background. This keeps our API responses fast and responsive. Have you considered what you need to do if the checksum doesn’t match? That’s a sign the file might have been corrupted during transfer, and you’d need to handle that error gracefully.
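If you do verify checksums, the comparison itself is simple. Here is a sketch using Node’s built-in crypto module, assuming the client reports a SHA-256 hex digest and you can read the stored bytes back (for example, streamed from S3); the helper name is mine:

```typescript
import { createHash } from "crypto";

// Hypothetical helper: compare the client-reported digest against one
// computed over the stored bytes.
function checksumMatches(fileBytes: Buffer, reportedHex: string): boolean {
  const actual = createHash("sha256").update(fileBytes).digest("hex");
  return actual === reportedHex;
}
```

On a mismatch, a reasonable policy is to mark the record as failed, delete the stored object, and ask the client to retry the upload.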
Security is non-negotiable. Presigned URLs are powerful, but we must be careful. Always validate on the server side before generating the URL. Use environment variables for credentials, and set tight bucket policies on S3. For instance, your S3 bucket should only allow writes via presigned URLs and reads from specific CDN origins, not public access. Here’s a snippet of a safe bucket policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your-bucket-name/uploads/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}
This policy ensures that uploads must use server-side encryption, adding an extra layer of protection. Note that the condition is evaluated against the actual PUT request, so either set ServerSideEncryption on the PutObjectCommand (making the header part of the signature) or have the client send the x-amz-server-side-encryption header explicitly. It’s a small detail that makes a big difference. I learned this the hard way after a minor oversight in a past project.
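One more detail that bites almost everyone the first time: because the browser is PUTting to a different origin (S3), the bucket also needs a CORS configuration, or the upload will fail the preflight check. A minimal configuration looks something like this, with `https://app.example.com` standing in for your real frontend origin:

```json
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["PUT"],
    "AllowedHeaders": ["Content-Type", "x-amz-server-side-encryption"],
    "MaxAgeSeconds": 3000
  }
]
```

You can apply this in the S3 console under the bucket’s permissions tab or via the PutBucketCors API.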
Let’s talk about the user experience. From the frontend, the process is straightforward. You get the presigned URL from your backend, then use the Fetch API or a library like Axios to PUT the file directly to S3. You can even track progress for large files. Here’s a minimal frontend example.
async function uploadFile(file) {
  // First, get the presigned URL from your backend
  const response = await fetch('/api/upload/request', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      fileName: file.name,
      fileType: file.type,
      fileSize: file.size,
    }),
  });
  if (!response.ok) {
    throw new Error('Upload request was rejected');
  }
  const { presignedUrl, s3Key } = await response.json();

  // Now upload directly to S3
  const uploadResponse = await fetch(presignedUrl, {
    method: 'PUT',
    body: file,
    headers: {
      'Content-Type': file.type,
    },
  });

  if (uploadResponse.ok) {
    // Notify your backend that upload is complete
    await fetch('/api/upload/confirm', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ s3Key }),
    });
    console.log('Upload successful!');
  } else {
    console.error('Upload failed');
  }
}
This separates the concerns nicely. The backend manages permissions and logic; the frontend handles the transfer. It’s clean and efficient.
But what about very large files? For files over 100MB, you might want to use S3’s multipart upload feature. This breaks the file into parts, uploads them in parallel, and then assembles them in S3. It’s more complex but essential for reliability with big files. The principle remains the same: your backend authorizes each part with a presigned URL. It’s a bit more code, but the pattern is similar.
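The part-splitting itself is just arithmetic. Here is a small sketch (the names are mine, not from the AWS SDK) of how a backend might carve a file into byte ranges before presigning an UploadPartCommand URL for each one:

```typescript
// Hypothetical helper: split a byte length into part ranges for a multipart upload.
// S3 requires parts of at least 5MB (except the last); 10MB is a reasonable choice.
const PART_SIZE = 10 * 1024 * 1024;

interface PartRange {
  partNumber: number; // S3 part numbers start at 1
  start: number;      // inclusive byte offset
  end: number;        // inclusive byte offset
}

function partRanges(totalBytes: number): PartRange[] {
  const parts: PartRange[] = [];
  for (let start = 0, n = 1; start < totalBytes; start += PART_SIZE, n++) {
    parts.push({ partNumber: n, start, end: Math.min(start + PART_SIZE, totalBytes) - 1 });
  }
  return parts;
}
```

The frontend would then slice each range out of the File object with `file.slice(start, end + 1)` and PUT it to that part’s presigned URL, and the backend would finish the upload with a CompleteMultipartUploadCommand listing the parts and their ETags.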
Throughout this journey, I’ve found that type safety is your best friend. By using TypeScript and Zod, we define clear contracts for our data. This prevents so many runtime errors. For example, when we expect a file size as a number, we ensure it’s a number before we use it. No more mysterious bugs from unexpected strings or null values.
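As a tiny illustration of the runtime side of that contract, here is a hypothetical guard of the kind Zod generates for you under the hood:

```typescript
// Hypothetical guard mirroring the fileSize rule: reject anything that
// isn't a positive integer before it reaches business logic.
function assertFileSize(value: unknown): number {
  if (typeof value !== "number" || !Number.isInteger(value) || value <= 0) {
    throw new Error(`fileSize must be a positive integer, got: ${String(value)}`);
  }
  return value;
}
```

With Zod you also get the static type for free via `z.infer<typeof uploadRequestSchema>`, so the compiler enforces the same shape that the runtime check guarantees.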
We’ve covered a lot. We moved from a brittle, server-heavy upload system to a streamlined, secure, and scalable architecture. Your Node.js server is free to focus on application logic, S3 handles the storage, and everyone benefits from faster, more reliable uploads. The code examples I’ve shared are starting points; you can extend them with more features like virus scanning, image optimization, or custom metadata.
I hope this guide shows you a better way to handle file uploads. It transformed how I build features, making them more robust and maintainable. If you found this useful, I’d love to hear about your experiences. Did this approach solve a problem for you? Do you have other tips to share? Please like, share, or comment below to let me know your thoughts and to help others discover this method. Let’s build more resilient applications together.