Build Resumable File Uploads with tus, AWS S3, Express, and TypeScript

Learn to build resumable file uploads with tus, AWS S3, Express, and TypeScript for reliable, scalable uploads that survive network failures.

I remember the moment I tried to upload a 2GB video file to a web app I was building. The upload failed at 97% because my WiFi briefly dropped. The entire file had to be sent again from scratch. That was the day I stopped trusting simple multipart uploads and started searching for something better. What I found was the tus protocol — an open, HTTP-based standard that lets you resume uploads from exactly where they stopped. No more tossing gigabytes into the void. And when you pair tus with AWS S3, you get a production‑ready pipeline that scales horizontally and handles failures gracefully. Today I’ll walk you through building exactly that, step by step, with full TypeScript safety.

Why does this matter for you? Because file uploads are everywhere — profile pictures, documents, video submissions, backups. If your users ever face a broken connection, they might leave your service for good. The tus protocol is used in production by companies like Vimeo and Cloudflare to solve exactly this problem. And by the time we’re done, you’ll have a resumable upload service that can handle files up to hundreds of megabytes with minimal server memory.

Let me start by setting up the project. Create a new directory and initialize it with pnpm. I prefer pnpm because it’s faster and enforces strict dependency resolution, but npm works fine too. Run the following commands:

mkdir tus-s3-upload-service && cd tus-s3-upload-service
pnpm init
pnpm add @tus/server @tus/s3-store express cors
pnpm add @aws-sdk/client-s3
pnpm add jsonwebtoken zod dotenv
pnpm add -D typescript @types/node @types/express @types/cors
pnpm add -D @types/jsonwebtoken tsx nodemon

Now create a tsconfig.json with strict mode enabled. I always do this because it catches bugs early. Here’s mine:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "rootDir": "./src",
    "outDir": "./dist",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "resolveJsonModule": true,
    "declaration": true,
    "sourceMap": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist"]
}

Next, create a .env file with your AWS credentials and some upload settings. I keep my environment variables in a .env file that I never commit to version control. Here’s a sample:

PORT=3000
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
S3_BUCKET=your-tus-uploads-bucket
JWT_SECRET=your_super_secret_key_here
MAX_FILE_SIZE_BYTES=524288000
CHUNK_SIZE_BYTES=5242880
UPLOAD_EXPIRY_SECONDS=3600
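Several of these values feed directly into code paths (part sizes, limits, secrets), so I like to validate them once at startup rather than discover a typo mid-upload. Here is a minimal sketch in plain TypeScript — `loadConfig` and the helper names are my own, not from any library. The AWS access keys are deliberately omitted because the SDK reads those from the environment on its own:

```typescript
// Minimal startup validation for the .env values above.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required env var: ${name}`);
  return value;
}

function requireIntEnv(name: string): number {
  const n = Number.parseInt(requireEnv(name), 10);
  if (!Number.isFinite(n) || n <= 0) throw new Error(`${name} must be a positive integer`);
  return n;
}

function loadConfig() {
  return {
    port: requireIntEnv("PORT"),
    region: requireEnv("AWS_REGION"),
    bucket: requireEnv("S3_BUCKET"),
    jwtSecret: requireEnv("JWT_SECRET"),
    maxFileSizeBytes: requireIntEnv("MAX_FILE_SIZE_BYTES"),
    chunkSizeBytes: requireIntEnv("CHUNK_SIZE_BYTES"),
    uploadExpirySeconds: requireIntEnv("UPLOAD_EXPIRY_SECONDS"),
  };
}
```

Failing fast here means a misconfigured deployment crashes on boot instead of rejecting uploads at runtime.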

Notice the CHUNK_SIZE_BYTES — I set it to 5MB because that’s the minimum part size for S3 multipart uploads (only the final part may be smaller). If you make earlier chunks smaller than 5MB, S3 will reject them. Also, MAX_FILE_SIZE_BYTES is 500MB, which is enough for most videos and large documents.
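A quick sanity check on those numbers: S3 also caps a multipart upload at 10,000 parts, so the part size fixes the largest file you can ever accept. A small sketch (the helper names are hypothetical) makes the arithmetic explicit:

```typescript
// Documented AWS S3 multipart limits.
const S3_MIN_PART_BYTES = 5 * 1024 * 1024; // non-final parts below 5 MiB are rejected
const S3_MAX_PARTS = 10_000;               // hard limit on parts per multipart upload

// How many parts a file of `fileBytes` needs at a given part size.
function partsNeeded(fileBytes: number, partBytes: number): number {
  if (partBytes < S3_MIN_PART_BYTES) throw new Error("part size below S3 minimum");
  return Math.ceil(fileBytes / partBytes);
}

// The largest file a given part size can carry within the 10,000-part limit.
function maxFileBytes(partBytes: number): number {
  return partBytes * S3_MAX_PARTS;
}
```

With 5MB parts, the 500MB limit above works out to exactly 100 parts, and the theoretical ceiling is 5 MiB × 10,000, roughly 48.8 GiB, far more headroom than this service needs.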

Now let’s configure the S3 bucket. With @tus/s3-store, the browser talks to your Express server rather than to S3 directly, so the CORS middleware on the server handles the tus requests themselves; bucket-level CORS is what you need if the browser will ever fetch the stored files straight from S3. Either way, set a lifecycle rule to automatically abort incomplete multipart uploads after 24 hours — that’s important to avoid paying for orphaned parts when users start an upload and never finish it.

Here’s a simple function that does both:

import { S3Client, PutBucketCorsCommand, PutBucketLifecycleConfigurationCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: process.env.AWS_REGION! });

export async function configureBucket(bucketName: string) {
  // CORS configuration for tus headers
  await s3.send(new PutBucketCorsCommand({
    Bucket: bucketName,
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedOrigins: ["*"], // in production, restrict to your domain
          AllowedMethods: ["GET", "PUT", "POST", "DELETE", "HEAD"],
          AllowedHeaders: [
            "Authorization",
            "Content-Type",
            "Upload-Offset",
            "Upload-Length",
            "Tus-Resumable",
            "Upload-Metadata"
          ],
          ExposeHeaders: [
            "Upload-Offset",
            "Location",
            "Upload-Length",
            "Tus-Version",
            "Tus-Resumable",
            "Tus-Extension"
          ],
          MaxAgeSeconds: 3600
        }
      ]
    }
  }));

  // Lifecycle: abort incomplete multipart uploads after 1 day
  await s3.send(new PutBucketLifecycleConfigurationCommand({
    Bucket: bucketName,
    LifecycleConfiguration: {
      Rules: [
        {
          ID: "abort-incomplete-uploads",
          Status: "Enabled",
          Filter: { Prefix: "uploads/" },
          AbortIncompleteMultipartUpload: { DaysAfterInitiation: 1 }
        }
      ]
    }
  }));
}

You can run this function once when your application starts. I usually put it in a separate setup script.

Now comes the core of the upload service — the tus server. The @tus/server package makes it incredibly easy. You create a Server instance and attach it to an Express app. The S3 store handles all the chunk assembly and tracks offsets. Here’s how I set it up:

import { Server } from "@tus/server";
import { S3Store } from "@tus/s3-store";
import { S3Client } from "@aws-sdk/client-s3";

const s3Client = new S3Client({ region: process.env.AWS_REGION! });

const server = new Server({
  path: "/uploads",
  datastore: new S3Store({
    s3Client,
    bucket: process.env.S3_BUCKET!,
    partSize: 5 * 1024 * 1024, // 5MB
    useTags: true,
    prefix: "uploads/"
  }),
  maxSize: parseInt(process.env.MAX_FILE_SIZE_BYTES!),
});

Notice useTags: true — that lets the store tag the objects it creates (for example, marking whether an upload has completed) so lifecycle rules and cleanup can target unfinished uploads. The upload’s own metadata, like the original filename and file type, travels in the Upload-Metadata header as comma-separated, base64-encoded key-value pairs, which the server decodes and persists alongside the upload.
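For reference, here is what that wire format looks like. You won’t need to hand-roll this, since the server and client libraries do it for you, but a minimal encoder/decoder sketch helps demystify the header:

```typescript
// Encode/decode the tus Upload-Metadata header: comma-separated
// "key base64(value)" pairs, e.g. "filename dmlkZW8ubXA0,filetype dmlkZW8vbXA0".
// Keys without a value are allowed by the spec and decode to an empty string here.
function encodeMetadata(meta: Record<string, string>): string {
  return Object.entries(meta)
    .map(([key, value]) => `${key} ${Buffer.from(value, "utf8").toString("base64")}`)
    .join(",");
}

function decodeMetadata(header: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const pair of header.split(",")) {
    const [key, b64] = pair.trim().split(" ");
    out[key] = b64 ? Buffer.from(b64, "base64").toString("utf8") : "";
  }
  return out;
}
```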

Now I want to add authentication. Without it, anyone could upload arbitrary files to your S3 bucket. I use JWT tokens that the client sends in the Authorization header. The tus server exposes an onIncomingRequest hook in its constructor options that runs before every request; throwing an object with status_code and body rejects the request. Let me show you a simple pattern:

import jwt from "jsonwebtoken";

const server = new Server({
  // ...path, datastore, and maxSize as shown above
  async onIncomingRequest(req, res) {
    const token = req.headers.authorization?.replace("Bearer ", "");
    if (!token) {
      throw { status_code: 401, body: "Authorization required" };
    }
    try {
      const decoded = jwt.verify(token, process.env.JWT_SECRET!);
      // Attach the decoded user to the request here if later hooks need it
    } catch {
      throw { status_code: 403, body: "Invalid token" };
    }
  },
});

This is a basic example. In a real app you’d validate the user’s quota, check allowed file types, etc.
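To make the trust boundary concrete, here is roughly what jwt.verify checks under the hood for an HS256 token: the signature is an HMAC-SHA256 over the header and payload segments. This sketch uses only node:crypto and is purely illustrative, not a replacement for the library:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative HS256 sign/verify. Real libraries like jsonwebtoken also
// enforce exp, nbf, and the alg header; keep using them in production.
const b64url = (buf: Buffer): string => buf.toString("base64url");

function signHS256(payload: object, secret: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = createHmac("sha256", secret).update(`${header}.${body}`).digest("base64url");
  return `${header}.${body}.${sig}`;
}

function verifyHS256(token: string, secret: string): Record<string, unknown> | null {
  const [header, body, sig] = token.split(".");
  if (!header || !body || !sig) return null;
  const expected = createHmac("sha256", secret).update(`${header}.${body}`).digest("base64url");
  // Constant-time comparison; lengths must match or timingSafeEqual throws.
  if (sig.length !== expected.length) return null;
  if (!timingSafeEqual(Buffer.from(sig), Buffer.from(expected))) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString("utf8"));
}
```

The takeaway: a token forged without your JWT_SECRET fails the HMAC check, which is why keeping that secret out of version control matters so much.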

But what about file type validation? The client can send any file. I want to scan for viruses or block dangerous extensions. The @tus/server package lets you define an onUploadCreate hook in the constructor options. This hook fires when a new upload is initiated, and by that point the server has already decoded the Upload-Metadata header into a plain object on upload.metadata. I use it to reject uploads that don’t match a whitelist of MIME types.

Here’s an example using Zod to validate the metadata:

import { z } from "zod";

const metadataSchema = z.object({
  filename: z.string().min(1),
  filetype: z.enum(["image/png", "image/jpeg", "video/mp4", "application/pdf"]),
});

const server = new Server({
  // ...path, datastore, and maxSize as shown above
  async onUploadCreate(req, res, upload) {
    const result = metadataSchema.safeParse(upload.metadata);
    if (!result.success) {
      throw { status_code: 400, body: "Invalid file type" };
    }
    return res;
  },
});

There’s no manual base64 parsing here — the server decodes the Upload-Metadata header for you before the hook runs.

Now let’s talk about the client side. The tus protocol works over HTTP, so you can use any HTTP client that supports chunked PATCH requests. The official tus-js-client is well‑maintained and handles all the low‑level details. Here’s a minimal browser implementation:

import * as tus from "tus-js-client";

const input = document.getElementById("file-input") as HTMLInputElement;
const file = input.files![0];
const upload = new tus.Upload(file, {
  endpoint: "https://your-server.com/uploads",
  retryDelays: [0, 1000, 3000, 5000],
  metadata: {
    filename: file.name,
    filetype: file.type,
  },
  headers: {
    Authorization: `Bearer ${getToken()}`,
  },
  onError: (error) => console.error("Upload failed", error),
  onProgress: (bytesUploaded, bytesTotal) => {
    const percentage = ((bytesUploaded / bytesTotal) * 100).toFixed(2);
    console.log(`${percentage}%`);
  },
  onSuccess: () => console.log("Upload finished!"),
});

upload.start();

Notice the retryDelays array — if a chunk fails, the client waits 0 seconds, then 1 second, then 3, then 5 before giving up. Combined with tus’s ability to resume, this makes the upload very resilient to network hiccups.

I remember once I tested this by unplugging my Ethernet cable in the middle of a 100MB upload. When I plugged it back, the upload resumed from exactly where it left off. My heart rate went back to normal. That’s the power of the tus protocol.

Now, after the upload completes, you usually want to do something with the file — move it to a permanent location, transcode it, or trigger a notification. The tus server supports an onUploadFinish hook, again as a constructor option. I use it to move the final object from the temporary uploads prefix to a permanent storage path.

import { CopyObjectCommand, DeleteObjectCommand } from "@aws-sdk/client-s3";

const server = new Server({
  // ...path, datastore, and maxSize as shown above
  async onUploadFinish(req, res, upload) {
    // With the "uploads/" prefix configured on the store, the S3 key is the
    // prefix plus the upload id, e.g. "uploads/abc123".
    const key = `uploads/${upload.id}`;
    const permanentKey = `permanent/${upload.metadata?.filename ?? upload.id}`;

    await s3.send(new CopyObjectCommand({
      Bucket: process.env.S3_BUCKET!,
      CopySource: `${process.env.S3_BUCKET!}/${key}`,
      Key: permanentKey,
    }));

    await s3.send(new DeleteObjectCommand({
      Bucket: process.env.S3_BUCKET!,
      Key: key,
    }));

    // Now you can store the permanent key in your database
    return res;
  },
});

You might also want to scan the file with ClamAV before moving it. I do that inside onUploadFinish using a simple child process that streams the file from S3. It’s not shown here for brevity, but the pattern works.

Now let’s talk about security beyond authentication. The tus protocol supports upload expiry — you can set a time limit within which the upload must be completed. I use this to prevent stale uploads from filling my S3 bucket. The @tus/s3-store has an expirationPeriodInMilliseconds option:

const store = new S3Store({
  s3Client,
  bucket: process.env.S3_BUCKET!,
  partSize: 5 * 1024 * 1024,
  useTags: true,
  prefix: "uploads/",
  expirationPeriodInMilliseconds: 3600 * 1000, // 1 hour
});

Uploads that aren’t finished within an hour are considered expired. The cleanup isn’t fully automatic, though: call server.cleanUpExpiredUploads() periodically (a setInterval or a cron job works) to abort the multipart uploads and delete the parts, with the bucket lifecycle rule from earlier as a backstop.

What about rate limiting? You don’t want a single client to hammer your server with too many concurrent uploads. I use a simple in‑memory counter, but in production you’d use Redis. Here’s a quick middleware for Express that limits the number of active uploads per user:

const activeRequests = new Map<string, number>();

app.use("/uploads", (req, res, next) => {
  // Assumes an earlier auth middleware attached the user; fall back to IP.
  const userId: string = (req as any).user?.id ?? req.ip ?? "anonymous";
  const count = activeRequests.get(userId) ?? 0;
  if (count >= 3) {
    res.status(429).send("Too many concurrent uploads");
    return;
  }
  activeRequests.set(userId, count + 1);
  res.on("finish", () => {
    activeRequests.set(userId, (activeRequests.get(userId) ?? 1) - 1);
  });
  next();
});

This is a simplistic version — it limits concurrent requests rather than whole uploads, and an in-memory map resets on restart and doesn’t share state across instances. For a real per-upload limit you’d increment on the creation POST, decrement in onUploadFinish, and keep the counters in Redis. But it gives the idea.

Now, how do you test all this locally? I use a small Express app that also serves a simple HTML page with the tus client. I start the server with npx tsx src/index.ts and open the browser. Then I upload a file, watch the progress in the console, simulate a network failure by turning off my local server for a moment, and see the client resume.

Let me give you the full Express app skeleton:

import "dotenv/config";
import express from "express";
import { Server } from "@tus/server";
import { S3Store } from "@tus/s3-store";
import { S3Client } from "@aws-sdk/client-s3";
import cors from "cors";

const app = express();
app.use(cors());

const s3Client = new S3Client({ region: process.env.AWS_REGION! });

const tusServer = new Server({
  path: "/uploads",
  datastore: new S3Store({
    s3Client,
    bucket: process.env.S3_BUCKET!,
    partSize: 5 * 1024 * 1024,
    useTags: true,
    prefix: "uploads/",
    expirationPeriodInMilliseconds: 3600 * 1000,
  }),
  maxSize: parseInt(process.env.MAX_FILE_SIZE_BYTES!, 10),
  // Add your hooks here
});

app.all("/uploads*", tusServer.handle.bind(tusServer));

const port = Number(process.env.PORT ?? 3000);
app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

You see how clean it is. The tus server handles all the logic — creating uploads with POST, uploading chunks with PATCH, checking status with HEAD, and terminating an upload with DELETE via the protocol’s termination extension. The S3 store abstracts all the multipart upload complexity.

Have you ever wondered how services like Vimeo handle giant uploads without crashing? They use this same protocol. The difference is that they add more layers of validation and scaling, but the core is identical.

Now let’s talk about scaling horizontally. The tus protocol is stateless if you use a shared data store like S3. That means you can run multiple Node.js instances behind a load balancer, and all of them can handle chunks for the same upload. The S3 store uses unique IDs and the Upload-Offset header to ensure atomicity. You don’t need sticky sessions. That’s a huge advantage over traditional multipart uploads.

One personal note: when I first tried to scale my earlier upload service, I ran into race conditions where two server instances would try to write the same chunk. With tus and S3, that never happens because each chunk has a fixed offset and S3’s multipart upload parts are independent. The final assembly is also atomic.
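The offset check that makes this safe is easy to sketch. Conceptually, a tus server only accepts a PATCH whose Upload-Offset matches the bytes it has already stored, so a duplicate or out-of-order chunk gets a 409 instead of corrupting the file. This is an illustrative model, not the @tus/server internals:

```typescript
// Conceptual model of the tus offset check performed on every PATCH.
interface UploadState {
  length: number; // total bytes declared via Upload-Length
  offset: number; // bytes stored so far
}

function applyChunk(state: UploadState, clientOffset: number, chunkBytes: number): UploadState {
  if (clientOffset !== state.offset) {
    // A stale or duplicate chunk: its claimed offset no longer matches reality.
    throw new Error(`409 Conflict: expected offset ${state.offset}, got ${clientOffset}`);
  }
  if (state.offset + chunkBytes > state.length) {
    throw new Error("400 Bad Request: chunk exceeds declared Upload-Length");
  }
  return { ...state, offset: state.offset + chunkBytes };
}
```

If two instances race to write the same chunk, whichever lands second sees the stored offset already advanced and gets the 409; the client then re-checks the real offset with a HEAD request and resumes from there.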

Finally, let me leave you with a conclusion. File uploads don’t have to be fragile. Using the tus protocol with AWS S3 gives you resumability, progress tracking, and scalability with very little code. The tools are mature and well‑documented. I encourage you to try this in your next project. Download the code I’ve shown, tweak it for your needs, and see how it handles network failures. You’ll be amazed at the resilience.

If you found this article useful, I’d appreciate if you could like it, share it with a colleague who struggles with uploads, and comment your thoughts or questions below. I read every comment and I’m happy to help you debug your specific setup. Let’s make uploads boringly reliable together.

