Secure Large File Uploads with Node.js, AWS S3, and Presigned URLs

Ever tried uploading a large file, only to watch it fail at 99%? Or worried about the security risks of letting users upload anything to your server? I hit these problems head-on while building a web application that needed to handle user-generated content. The traditional method, where files travel through your backend, often bogs down performance and creates a single point of failure. It made me look for a better way—a method that is secure, scalable, and keeps your server agile. This led me to a powerful combination: Node.js, AWS S3, and presigned URLs. Let’s walk through how you can set this up.

How does it work? Instead of sending a file to your Node.js server, your server gives the client a special, time-limited URL. The client uses this URL to upload the file directly to an AWS S3 bucket. Your server never touches the actual file bytes, freeing up resources for other tasks. You maintain complete control over who gets a URL and what they can upload, but you offload the heavy lifting to AWS’s robust infrastructure.

First, we need to set up our project. Create a new directory and initialize it. We’ll use Express, the AWS SDK, and Zod for validation.

npm init -y
npm install express @aws-sdk/client-s3 @aws-sdk/s3-request-presigner zod dotenv
npm install -D typescript @types/node @types/express ts-node-dev

Create a .env file at the project root to hold your credentials and configuration. Never hardcode these values.

AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_key_here
AWS_SECRET_ACCESS_KEY=your_secret_here
S3_BUCKET_NAME=your-upload-bucket
PRESIGNED_URL_EXPIRY=300
MAX_FILE_SIZE=52428800

Now, let’s build the core of our server. We start by configuring the AWS S3 client. Why create a single instance? It’s efficient and a best practice recommended by the SDK itself.

// src/libs/s3Client.ts
import 'dotenv/config'; // load .env before reading process.env
import { S3Client } from '@aws-sdk/client-s3';
import { fromEnv } from '@aws-sdk/credential-providers';

const s3Client = new S3Client({
  region: process.env.AWS_REGION,
  credentials: fromEnv()
});

export { s3Client };

Before we hand out any upload permissions, we must validate the request. What if a user tries to upload a massive executable file? We define clear rules using Zod.

// src/validators/uploadValidator.ts
import { z } from 'zod';

// Fall back to 50 MB if the env var is missing, so the limit is never NaN
const MAX_FILE_SIZE = Number(process.env.MAX_FILE_SIZE ?? 52428800);

const GenerateUploadUrlSchema = z.object({
  fileName: z.string().min(1, "Filename is required"),
  fileType: z.string().regex(/^image\/\w+$|^application\/pdf$/, "Only images and PDFs are allowed"),
  fileSize: z.number().positive().max(MAX_FILE_SIZE, "File is too large")
});

export { GenerateUploadUrlSchema };

The magic happens in the route handler. This is where we generate the presigned URL. Think of it as a secure, one-time ticket for the client. One caveat: the fileSize we validate is only a client-supplied hint; a basic presigned PUT does not enforce object size on the S3 side, so treat this check as a first line of defense rather than a hard limit.

// src/routes/upload.ts
import express from 'express';
import { PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { s3Client } from '../libs/s3Client.js';
import { GenerateUploadUrlSchema } from '../validators/uploadValidator.js';

const app = express();
app.use(express.json()); // parse JSON bodies so req.body is populated

app.post('/generate-upload-url', async (req, res) => {
  const validation = GenerateUploadUrlSchema.safeParse(req.body);
  if (!validation.success) {
    return res.status(400).json({ error: validation.error.errors });
  }

  const { fileName, fileType } = validation.data;
  // Strip path separators and other unsafe characters from the user-supplied name
  const safeName = fileName.replace(/[^\w.\-]/g, '_');
  const fileKey = `uploads/${Date.now()}-${safeName}`;

  const command = new PutObjectCommand({
    Bucket: process.env.S3_BUCKET_NAME,
    Key: fileKey,
    ContentType: fileType,
  });

  try {
    const uploadUrl = await getSignedUrl(s3Client, command, {
      expiresIn: Number(process.env.PRESIGNED_URL_EXPIRY)
    });

    res.json({
      uploadUrl,
      fileKey,
      expiresIn: Number(process.env.PRESIGNED_URL_EXPIRY)
    });
  } catch (error) {
    res.status(500).json({ error: 'Could not generate upload URL' });
  }
});

app.listen(3000, () => console.log('Upload service listening on port 3000'));

What happens after the client gets the URL? They can use a simple fetch call to upload their file directly to S3. Your server is already done with its job.

// Example Client-Side Code
async function uploadFile(file, signedUrl) {
  const response = await fetch(signedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file
  });

  if (response.ok) {
    console.log('Upload successful!');
  } else {
    console.error('Upload failed.');
  }
}

But are we done? Not quite. How can we be sure the file arrived correctly? We should implement a verification step. The client can notify our server after a successful upload, and we can check the file exists in S3.
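As a minimal sketch of that verification step, the S3 existence check can be injected as a function so the handler logic is testable without AWS credentials. Here, confirmUpload and isValidUploadKey are hypothetical helpers; in a real service, the injected function would wrap HeadObjectCommand from @aws-sdk/client-s3 and return false on a NotFound error.

```typescript
// Sketch of a hypothetical confirm-upload step. The S3 existence check is
// injected so this logic runs without AWS credentials; in production it
// would wrap HeadObjectCommand from @aws-sdk/client-s3.
type HeadFn = (key: string) => Promise<boolean>;

// Accept only keys our own /generate-upload-url route could have issued,
// so clients cannot use this endpoint to probe arbitrary objects.
function isValidUploadKey(key: string): boolean {
  return /^uploads\/\d+-[^/]+$/.test(key);
}

async function confirmUpload(
  key: string,
  objectExists: HeadFn
): Promise<{ ok: boolean; reason?: string }> {
  if (!isValidUploadKey(key)) {
    return { ok: false, reason: 'Key does not match our upload pattern' };
  }
  const exists = await objectExists(key);
  return exists ? { ok: true } : { ok: false, reason: 'Object not found in S3' };
}
```

Once confirmed, you can persist the fileKey to your database and treat the upload as complete.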

Security is paramount. Our S3 bucket must be configured correctly. It should not allow public uploads directly. The presigned URLs are the only gate. We also set up CORS rules to allow our web app to interact with the bucket.

// Example S3 Bucket CORS Configuration
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["PUT"],
        "AllowedOrigins": ["https://yourdomain.com"],
        "ExposeHeaders": ["ETag"]
    }
]

What about very large files, like videos? For these, AWS offers Multipart Uploads. You can extend this system by generating presigned URLs for each part, allowing the client to upload in manageable chunks. It’s a more complex process but follows the same principle of direct, serverless transfers.
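As a rough sketch of the client-side half of that approach, the file first needs to be split into byte ranges, one per part. planParts below is a hypothetical helper; the server-side half would call CreateMultipartUploadCommand, then sign one UploadPartCommand per part. Note that S3 requires every part except the last to be at least 5 MB.

```typescript
// Hypothetical helper: split a file of fileSize bytes into byte ranges, one
// per multipart part. S3 requires all parts except the last to be >= 5 MB,
// so we default to 10 MB. Each range maps to one presigned UploadPart URL.
interface PartRange {
  partNumber: number; // S3 part numbers start at 1
  start: number;      // inclusive byte offset
  end: number;        // exclusive byte offset (suitable for Blob.slice)
}

function planParts(fileSize: number, partSize = 10 * 1024 * 1024): PartRange[] {
  const parts: PartRange[] = [];
  for (let start = 0, n = 1; start < fileSize; start += partSize, n++) {
    parts.push({ partNumber: n, start, end: Math.min(start + partSize, fileSize) });
  }
  return parts;
}
```

On the client, each range becomes file.slice(start, end) sent via PUT to that part's presigned URL; the server then finishes with CompleteMultipartUploadCommand, passing the ETag returned for each part.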

This approach transforms your application’s capabilities. Your backend becomes a traffic director, not a warehouse. It handles authentication and issues permissions, while AWS handles the storage and bandwidth. The result is a system that can scale effortlessly as your user base grows.

Wasn’t that simpler than wrestling with file streams on your own server? This method provides a clean separation of concerns. It improves security, reduces server load, and offers a better experience for your users. Give it a try in your next project.

If you found this guide helpful, please share it with a fellow developer who might be struggling with file uploads. Have you implemented a similar system? What challenges did you face? Let me know in the comments below—I’d love to hear about your experiences.


As a best-selling author, I invite you to explore my books on Amazon. Don’t forget to follow me on Medium and show your support. Thank you! Your support means the world!


101 Books

101 Books is an AI-driven publishing company co-founded by author Aarav Joshi. By leveraging advanced AI technology, we keep our publishing costs incredibly low—some books are priced as low as $4—making quality knowledge accessible to everyone.

Check out our book Golang Clean Code available on Amazon.

Stay tuned for updates and exciting news. When shopping for books, search for Aarav Joshi to find more of our titles. Use the provided link to enjoy special discounts!


📘 Check out my latest ebook for free on my channel!
Be sure to like, share, comment, and subscribe to the channel!


Our Creations

Be sure to check out our creations:

Investor Central | Investor Central Spanish | Investor Central German | Smart Living | Epochs & Echoes | Puzzling Mysteries | Hindutva | Elite Dev | JS Schools


We are on Medium

Tech Koala Insights | Epochs & Echoes World | Investor Central Medium | Puzzling Mysteries Medium | Science & Epochs Medium | Modern Hindutva



