I’ve been thinking a lot about file uploads lately. If you’ve ever built a web application, you know this simple task can quickly become complicated. One day, everything works on your machine. The next, you’re dealing with broken images, crashed servers, or security warnings. It doesn’t have to be this hard. Today, I want to walk you through building a robust system that handles files with care. We’ll go from receiving a file to safely storing it, with stops for resizing images and checking for viruses. Let’s build something that won’t keep you up at night.
First, we need a solid foundation. Our project will use Node.js with TypeScript. Why TypeScript? It catches mistakes before your code runs. Think of it as a helpful friend who points out when you’re about to send text where a picture should go. We’ll organize our work into clear services, each with a single job.
Setting up the project is straightforward. We create a new directory and install our tools. These tools are our building blocks. Express will handle web requests. Multer will manage file uploads. Sharp will process images. AWS SDK will talk to cloud storage.
mkdir file-upload-pipeline && cd file-upload-pipeline
npm init -y
npm install express multer sharp zod @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
Next, we validate our configuration. Using a schema library like Zod prevents configuration errors: it ensures required values, like our cloud storage bucket name, are present before the app starts. This step avoids those confusing crashes in production.
import { z } from 'zod';

const EnvSchema = z.object({
  AWS_S3_BUCKET: z.string(),
  JWT_SECRET: z.string().min(32),
});

export const env = EnvSchema.parse(process.env);
Now, let’s receive a file. The Multer library helps with this. We configure it to accept files into memory. We also set limits. What’s the point of allowing a 10GB file if it will only crash your server? We define reasonable size boundaries.
import express from 'express';
import multer from 'multer';

const app = express();

const upload = multer({
  storage: multer.memoryStorage(),          // the whole file is held in RAM
  limits: { fileSize: 100 * 1024 * 1024 },  // reject anything over 100 MB
});

// handleUpload receives req.file with the buffer, original name, and MIME type
app.post('/upload', upload.single('file'), handleUpload);
What happens after we get the file? We shouldn’t just send it off. We need to know what it is. Is it an image? A PDF? A potential threat? We check the file’s MIME type against a list of allowed types. This is a basic but critical security gate.
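A minimal version of that gate might look like the following. Keep in mind that the MIME type Multer reports comes from the client's Content-Type header, so treat this as a first filter rather than proof of the file's contents, and pair it with content sniffing (magic bytes) for real assurance. The allowed list here is an illustrative assumption.

```typescript
// Allowlist of MIME types this pipeline accepts (illustrative choice).
const ALLOWED_TYPES = new Set([
  'image/jpeg',
  'image/png',
  'image/webp',
  'application/pdf',
]);

// Returns true only for types we explicitly permit; everything
// unknown is rejected by default.
function isAllowedType(mimeType: string): boolean {
  return ALLOWED_TYPES.has(mimeType);
}

isAllowedType('image/png');        // true
isAllowedType('application/x-sh'); // false
```

Rejecting by default is the important design choice here: a new file type only gets through after someone deliberately adds it to the list.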
A file passes the type check. If it’s an image, we can improve it. The Sharp library is fantastic for this. We can resize a large photo to a web-friendly dimension and convert it to an efficient format like WebP. This saves storage space and loads faster for your users.
import sharp from 'sharp';

async function processImage(buffer: Buffer): Promise<Buffer> {
  return sharp(buffer)
    .resize(1200, 800, { fit: 'inside' }) // fit within 1200x800, keep aspect ratio
    .webp({ quality: 80 })                // smaller files at comparable quality
    .toBuffer();
}
Would you store a package without knowing its contents? Probably not. The same goes for files. Integrating a virus scan adds a powerful layer of security. We can use a tool like ClamAV to check the file’s content before it touches permanent storage. If a virus is found, we reject the upload and log the event.
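As a sketch of one integration path: ClamAV's clamd daemon exposes an INSTREAM command (typically on TCP port 3310) that accepts the file as length-prefixed chunks. The helper below builds that payload following the framing described in the clamd documentation; opening the socket and parsing the "FOUND" reply are left out, and the chunk size is an arbitrary assumption.

```typescript
// Build the INSTREAM payload clamd expects: the literal command,
// then each chunk prefixed with its length as a 4-byte big-endian
// integer, terminated by a zero-length chunk.
function buildInstreamPayload(data: Buffer, chunkSize = 64 * 1024): Buffer {
  const parts: Buffer[] = [Buffer.from('zINSTREAM\0')];
  for (let offset = 0; offset < data.length; offset += chunkSize) {
    const chunk = data.subarray(offset, offset + chunkSize);
    const len = Buffer.alloc(4);
    len.writeUInt32BE(chunk.length, 0);
    parts.push(len, chunk);
  }
  parts.push(Buffer.alloc(4)); // zero-length chunk ends the stream
  return Buffer.concat(parts);
}
```

In practice you would write this buffer to a socket connected to clamd and reject the upload if the response line contains "FOUND".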
With the file processed and checked, it’s time for storage. Uploading directly from your server to a bucket can work, but it uses your bandwidth. There’s a better way. We can generate a special, temporary URL that lets the user’s browser upload directly to the storage service. This is called a presigned URL.
import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const client = new S3Client({ region: 'us-east-1' });

async function createUploadUrl(fileName: string): Promise<string> {
  const command = new PutObjectCommand({
    Bucket: 'my-bucket',
    Key: `uploads/${fileName}`,
  });
  return getSignedUrl(client, command, { expiresIn: 3600 }); // valid for one hour
}
This URL expires in an hour, limiting its risk. We send this URL back to the frontend. The frontend code then uses it to upload the file directly. This method is efficient and secure. But what about very large files, like a long video? They can fail if the connection drops. Have you ever had to restart a long upload from zero?
To solve this, we implement resumable uploads. The idea is to split the file into smaller pieces, or chunks. Each chunk is uploaded separately. The server keeps track of which chunks are received. If the upload stops, it can later resume from the last successful chunk. This requires more logic but provides a much smoother user experience.
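A sketch of the chunk bookkeeping this requires, where the 5 MB chunk size and the function name are illustrative assumptions:

```typescript
const CHUNK_SIZE = 5 * 1024 * 1024; // 5 MB per chunk (assumed)

// Split a file size into the byte ranges the client still needs to
// upload. Indices already recorded as received are skipped, so a
// resumed upload only sends the missing pieces.
function planChunks(
  fileSize: number,
  received: Set<number>,
): { index: number; start: number; end: number }[] {
  const total = Math.ceil(fileSize / CHUNK_SIZE);
  const plan: { index: number; start: number; end: number }[] = [];
  for (let i = 0; i < total; i++) {
    if (received.has(i)) continue;
    plan.push({
      index: i,
      start: i * CHUNK_SIZE,
      end: Math.min((i + 1) * CHUNK_SIZE, fileSize), // exclusive
    });
  }
  return plan;
}
```

On resume, the client asks the server which indices it already has, passes them in as `received`, and uploads only what remains.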
All these actions—receiving, checking, processing, tracking chunks—need to be recorded. We use a database table to track each upload session. It holds the file name, status, size, and how many chunks are done. This record is our source of truth for the upload’s progress.
CREATE TABLE upload_sessions (
  id UUID PRIMARY KEY,
  file_name TEXT NOT NULL,
  status VARCHAR(50) NOT NULL,
  total_chunks INTEGER NOT NULL,
  uploaded_chunks INTEGER DEFAULT 0
);
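The two counters are enough to derive the session's state. A tiny helper along these lines (the status names are assumptions; use whatever values your status column stores):

```typescript
// Derive an upload session's status from its chunk counters.
function sessionStatus(uploadedChunks: number, totalChunks: number): string {
  if (uploadedChunks === 0) return 'pending';
  if (uploadedChunks < totalChunks) return 'in_progress';
  return 'complete';
}
```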
Finally, we wrap everything in a secure blanket. We add authentication to ensure only logged-in users can upload. We implement rate limiting to prevent abuse. We set strict CORS policies. We use HTTPS everywhere. Security is not a single feature; it’s the result of many careful decisions.
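In an Express app, rate limiting is usually a one-liner with middleware such as express-rate-limit, but the underlying idea fits in a few lines. Here is a fixed-window counter sketch; the limit and window values are arbitrary assumptions:

```typescript
// Fixed-window rate limiter: allow up to `limit` uploads per client
// in each window. In production, back this with Redis so counts
// survive restarts and are shared across server instances.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(
    private limit = 20,                // uploads per window (assumed)
    private windowMs = 15 * 60 * 1000, // 15-minute window (assumed)
  ) {}

  allow(clientId: string, now = Date.now()): boolean {
    const entry = this.counts.get(clientId);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(clientId, { windowStart: now, count: 1 });
      return true;
    }
    entry.count++;
    return entry.count <= this.limit;
  }
}
```

The `clientId` could be the authenticated user's ID or, as a fallback, the request IP.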
Building this pipeline requires thought, but the payoff is huge. You get a system that is reliable, efficient, and secure. It scales gracefully and gives users a professional experience. You stop worrying about “what if” and start trusting your tools.
I hope this guide gives you a clear path forward. File uploads are a common challenge, but with the right approach, they become a strength of your application. What part of your current upload process feels the most fragile? Share your thoughts in the comments below. If you found this helpful, please like and share it with another developer who might be battling their own upload system. Let’s build better software, together.