I remember staring at a production incident where a user uploaded a 50MB TIFF file that wasn’t an image at all — it was a renamed executable. The server crashed, my pager buzzed, and I realized our puny two-line multer middleware was a liability. Since then, I’ve been obsessed with building upload pipelines that are type-safe, secure, and optimized. Why? Because every single time I see a hastily bolted-on upload endpoint in a codebase, I know there’s a disaster waiting — corrupted storage, security holes, or images that look pixelated on mobile. This article is my attempt to share what I’ve learned.
Let me start with a confession: I used to trust the mimetype property on the uploaded file blindly. That’s like trusting a stranger who says “I’m a doctor” just because they have a white coat. Multer sets the mimetype based on the HTTP request’s content-type header, which any client can forge. So how do we really know what kind of file we’re getting? The answer lies in reading the actual magic bytes at the start of the file. Node’s file-type package does that. Here’s a minimal validator I now use:
import { fileTypeFromBuffer } from 'file-type';

export async function validateFileType(buffer: Buffer, allowedTypes: string[]) {
  // Detect the real type from the magic bytes, not from the client-supplied content-type header
  const type = await fileTypeFromBuffer(buffer);
  if (!type || !allowedTypes.includes(type.mime)) {
    throw new Error(`Forbidden file type: ${type?.mime || 'unknown'}`);
  }
  return type.mime;
}
This little function saved me from accepting a GIF that was actually a Windows executable. Now, every upload goes through it — right after multer places the file in memory.
You might ask: why memory storage instead of disk? Because we’re streaming straight to AWS S3 anyway. No need to write to disk and then read it back. Multer’s memory storage keeps the file as a Buffer in the request object. That buffer is our ticket to validation, transformation with Sharp, and finally the S3 upload. Here’s how I configure multer:
import multer from 'multer';

const storage = multer.memoryStorage();

export const upload = multer({
  storage,
  limits: { fileSize: 10 * 1024 * 1024 }, // 10 MB
  fileFilter: (req, file, cb) => {
    // We'll do real validation later, but reject obviously wrong MIMEs early
    const allowed = ['image/jpeg', 'image/png', 'image/webp', 'application/pdf'];
    if (allowed.includes(file.mimetype)) {
      cb(null, true);
    } else {
      cb(new Error(`Unsupported file type: ${file.mimetype}`));
    }
  },
});
Notice I haven’t let my guard down yet. Even with that first filter, a malicious user can still send a renamed binary whose content-type header claims a perfectly valid MIME type. That’s why we call our validateFileType function on the actual buffer after multer runs.
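Here’s one way to wire that up as a reusable middleware. This is a sketch under my own naming — requireFileType isn’t part of multer or Express — and it simply runs after multer, checking the buffer’s magic bytes before your handler ever sees the file:

import { Request, Response, NextFunction } from 'express';
import { validateFileType } from '../utils/file-validator';

// Hypothetical middleware: runs after multer and verifies the buffer's magic bytes
export function requireFileType(allowedTypes: string[]) {
  return async (req: Request, res: Response, next: NextFunction) => {
    try {
      if (!req.file) {
        return res.status(400).json({ error: 'No file provided' });
      }
      await validateFileType(req.file.buffer, allowedTypes);
      next();
    } catch (error) {
      next(error);
    }
  };
}

// Usage: router.post('/upload', upload.single('file'), requireFileType(['image/jpeg', 'image/png']), handler);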
Now, after validation, what’s the smartest way to store images in S3? Let the cloud worry about scale, but you worry about cost and performance. Raw images are huge. Sharp can resize and convert them before they ever hit the bucket. Let me show you a pipeline that reduces an uploaded image to a reasonable width while preserving aspect ratio, converts it to WebP (smaller than JPEG), and compresses it – all in memory.
import sharp from 'sharp';

export interface TransformOptions {
  width?: number;
  height?: number;
  fit?: 'cover' | 'contain' | 'fill' | 'inside' | 'outside';
  format?: 'jpeg' | 'png' | 'webp' | 'avif';
  quality?: number;
}

export async function transformImage(buffer: Buffer, options: TransformOptions): Promise<Buffer> {
  let pipeline = sharp(buffer);
  if (options.width || options.height) {
    pipeline = pipeline.resize({
      width: options.width,
      height: options.height,
      fit: options.fit || 'cover',
      withoutEnlargement: true, // never upscale small originals
    });
  }
  if (options.format && options.quality) {
    pipeline = pipeline.toFormat(options.format, { quality: options.quality });
  } else if (options.format) {
    pipeline = pipeline.toFormat(options.format);
  }
  return pipeline.toBuffer();
}
I use a default resize to 1200px width for photos displayed on the web – that covers most screens without wasting bytes. But you can make it configurable per upload request.
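If you do make it configurable, keep the surface small. Here’s a hedged sketch that reads a couple of whitelisted query parameters and clamps them to sane bounds — the parameter names and limits are my own assumptions, not a standard:

// Hypothetical helper: derive TransformOptions from query parameters, with safe bounds
function resolveTransformOptions(query: Record<string, unknown>): TransformOptions {
  const requestedWidth = Number(query.width);
  const requestedQuality = Number(query.quality);
  return {
    // clamp width between 200 and 2400 px; default to 1200
    width: Number.isFinite(requestedWidth) ? Math.min(Math.max(requestedWidth, 200), 2400) : 1200,
    format: 'webp',
    // clamp quality between 40 and 95; default to 85
    quality: Number.isFinite(requestedQuality) ? Math.min(Math.max(requestedQuality, 40), 95) : 85,
  };
}

// Usage in a route: transformImage(file.buffer, resolveTransformOptions(req.query))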
Once the file is transformed, we upload it to S3. The AWS SDK v3 client is straightforward, but I always add a retry mechanism because network failures happen. Here’s a helper I wrote:
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

export async function uploadWithRetry(
  client: S3Client,
  params: { Bucket: string; Key: string; Body: Buffer; ContentType: string },
  retries = 3
): Promise<boolean> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      await client.send(new PutObjectCommand(params));
      return true;
    } catch (error) {
      if (attempt === retries) throw error;
      // exponential backoff: 200ms, 400ms, 800ms, ...
      await new Promise(resolve => setTimeout(resolve, Math.pow(2, attempt) * 100));
    }
  }
  return false;
}
Now, when do you need a pre-signed URL? For client-side uploads where you want the user to upload directly to S3 without your server processing the file. That’s not automatically safer – you still need server-side validation. The best pattern: the client asks your server for a signed URL, the server validates that the request is allowed (e.g., the user is authenticated and wants to upload a profile picture), generates an expiration-limited URL, and returns it. The client then uploads directly. Here’s how to generate one:
import crypto from 'node:crypto';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { PutObjectCommand } from '@aws-sdk/client-s3';
import { getS3Client } from '../config/s3';

async function generatePresignedUploadUrl(
  fileName: string,
  contentType: string,
  expirySeconds: number = 3600
): Promise<string> {
  const client = getS3Client();
  const params = {
    Bucket: process.env.S3_BUCKET_NAME!,
    // random prefix prevents key collisions and guessable object names
    Key: `uploads/${crypto.randomUUID()}-${fileName}`,
    ContentType: contentType,
  };
  const command = new PutObjectCommand(params);
  const url = await getSignedUrl(client, command, { expiresIn: expirySeconds });
  return url;
}
But wait – if the client uploads directly, how do we validate the file? You can’t reliably inspect the contents on the client side. That’s why I still recommend a two-step process: the client uploads to the pre-signed URL, then sends a separate confirmation API call with the file’s metadata (hash, size, etc.) that your server can verify, ideally backed by a callback from S3 (such as S3 Event Notifications to Lambda). That’s advanced; for most small to mid-size projects, it’s enough to validate on the server after the upload with an S3 HeadObject request.
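To make that last option concrete, here’s a minimal sketch of a post-upload check with HeadObject. Note that it can only verify the size and the stored content type, which the uploader supplied – true magic-byte validation still requires fetching the bytes or processing them in a Lambda. The helper name and limits are mine:

import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';

// Hypothetical post-upload check: confirm the object exists, is small enough,
// and carries an expected content type before recording it in our database.
async function confirmUploadedObject(client: S3Client, bucket: string, key: string) {
  const head = await client.send(new HeadObjectCommand({ Bucket: bucket, Key: key }));
  const maxBytes = 10 * 1024 * 1024; // mirror the 10 MB multer limit
  const allowed = ['image/jpeg', 'image/png', 'image/webp'];
  if (!head.ContentLength || head.ContentLength > maxBytes) {
    throw new Error('Uploaded object is missing or too large');
  }
  if (!head.ContentType || !allowed.includes(head.ContentType)) {
    throw new Error(`Unexpected content type: ${head.ContentType}`);
  }
  return { size: head.ContentLength, contentType: head.ContentType };
}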
One thing I’ve learned the hard way: never store files under their original filenames. Always prefix with a UUID or a timestamp to prevent collisions and path traversal. I also add a folder structure based on date: profile_pictures/2024/10/. That makes life easier when you need to expire old files.
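A small key builder keeps that policy consistent everywhere. This is my own sketch; adjust the prefix and date layout to whatever your cleanup jobs expect:

import { randomUUID } from 'node:crypto';

// Hypothetical helper: build a collision-free, traversal-safe S3 key like
// profile_pictures/2024/10/<uuid>.webp
function buildObjectKey(prefix: string, extension: string): string {
  const now = new Date();
  const year = now.getUTCFullYear();
  const month = String(now.getUTCMonth() + 1).padStart(2, '0');
  // never trust the original filename; keep only a known-safe extension
  return `${prefix}/${year}/${month}/${randomUUID()}.${extension}`;
}

// Usage: buildObjectKey('profile_pictures', 'webp')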
Let me tie it all together. Here’s a simplified Express route that accepts an upload, validates, transforms, and saves:
import { Router, Request, Response } from 'express';
import crypto from 'node:crypto';
import { upload } from '../middleware/upload.middleware';
import { validateFileType } from '../utils/file-validator';
import { transformImage } from '../services/image.service';
import { uploadWithRetry } from '../utils/retry';
import { getS3Client } from '../config/s3';

const router = Router();

router.post('/upload', upload.single('file'), async (req: Request, res: Response) => {
  try {
    const file = req.file;
    if (!file) return res.status(400).json({ error: 'No file provided' });

    // 1. Validate magic bytes
    await validateFileType(file.buffer, ['image/jpeg', 'image/webp', 'image/png']);

    // 2. Transform
    const optimizedBuffer = await transformImage(file.buffer, {
      width: 1200,
      format: 'webp',
      quality: 85,
    });

    // 3. Upload
    const bucket = process.env.S3_BUCKET_NAME!;
    const key = `uploads/${Date.now()}-${crypto.randomUUID()}.webp`;
    const s3Client = getS3Client();
    await uploadWithRetry(s3Client, {
      Bucket: bucket,
      Key: key,
      Body: optimizedBuffer,
      ContentType: 'image/webp',
    });

    // 4. Return public URL
    res.json({ url: `https://${bucket}.s3.${process.env.AWS_REGION}.amazonaws.com/${key}` });
  } catch (error) {
    console.error('Upload failed:', error);
    res.status(500).json({ error: 'Upload failed' });
  }
});
You’ll notice I didn’t add authentication here – I assume you have your own middleware for that. Also, I recommend wrapping the whole handler with an error boundary that returns structured errors rather than raw exceptions. But that’s for another day.
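For what it’s worth, the rough shape I usually end up with is tiny – an async wrapper plus a central error middleware. The names here are mine, not anything Express ships:

import { Request, Response, NextFunction, RequestHandler } from 'express';

// Hypothetical wrapper: forwards rejected promises from async handlers to Express error middleware
export function asyncHandler(
  handler: (req: Request, res: Response, next: NextFunction) => Promise<unknown>
): RequestHandler {
  return (req, res, next) => {
    handler(req, res, next).catch(next);
  };
}

// Central error boundary that returns a structured body instead of a raw exception
export function errorBoundary(err: Error, _req: Request, res: Response, _next: NextFunction) {
  console.error('Request failed:', err);
  res.status(500).json({ error: 'Request failed', detail: err.message });
}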
If you’ve read this far, you’re probably nodding along, recalling your own messy upload code. I know I wrote plenty of it. The shift to type-safe, validated pipelines isn’t just about security – it’s about confidence. When I deploy now, I know that no executable masquerading as “photo.jpg” will reach my bucket, and that every image is sized appropriately for the web, saving bandwidth and money.
So here’s my question for you: What’s the worst thing that ever happened because of an unchecked file upload in your project? I’d love to hear your story in the comments below. If this guide helped you think differently about uploads, please like and share it with your team – because safe uploads shouldn’t be a secret.