
How to Build Real-Time APIs with Edge State Using Cloudflare Durable Objects

Learn how to create low-latency, globally synchronized APIs using Cloudflare Durable Objects and edge state architecture.

I was building a chat feature for a global team when I hit a wall. How do you keep everyone’s messages in sync when they’re connecting from Tokyo, London, and San Francisco? Traditional databases added too much lag. That’s when I found a better way. Let’s talk about building APIs that feel local, no matter where your users are.

The secret is moving your application’s state to the edge, close to the user. This is different from just caching static files. We’re talking about live, interactive data. Imagine a counter that can be updated from anywhere in the world, instantly. Or a chat room where messages appear for everyone at the same time. This is what we can build.

Why does this matter? Speed. Every millisecond of delay costs you user engagement. When your data logic lives in one central data center, users on the other side of the planet pay a latency penalty. By distributing state, you remove that penalty. The user in Sydney interacts with a data instance that’s physically nearby.

So, how do we start? We need the right tools. For this, we’ll use Cloudflare’s platform. It provides a unique building block called a Durable Object. Think of it as a tiny, stateful server that can live at any of Cloudflare’s hundreds of locations. It’s a single-threaded environment that guarantees consistency for its data.

Let’s set up our project. First, make sure you have Node.js installed. Open your terminal and run these commands to create a new project and install the necessary tools.

mkdir global-api-project
cd global-api-project
npm init -y
npm install -D wrangler typescript
npx wrangler init

This creates a new project and installs Wrangler, the command-line tool for Cloudflare. It also sets up a basic TypeScript configuration. Next, we need to define our Durable Object. Create a file called src/Counter.ts.

export class Counter {
  state: DurableObjectState;
  value: number;

  constructor(state: DurableObjectState) {
    this.state = state;
    this.value = 0;
    // Load the last saved value before any requests are processed
    this.state.blockConcurrencyWhile(async () => {
      let stored = await this.state.storage.get<number>("value");
      this.value = stored || 0;
    });
  }

  async fetch(request: Request) {
    let url = new URL(request.url);

    if (request.method === "POST" && url.pathname === "/increment") {
      this.value++;
      await this.state.storage.put("value", this.value);
      return new Response(this.value.toString());
    }

    return new Response(this.value.toString());
  }
}

What’s happening here? The Counter class is our Durable Object. Its state gives us access to persistent storage. In the constructor, we use blockConcurrencyWhile to safely load the last saved value. The fetch method handles HTTP requests. A POST to /increment increases the count and saves it. A simple GET returns the current value.

But this object is just a blueprint. We need to tell our main Worker script how to find and create instances of it. Let’s look at the main entry point, src/index.ts.

export { Counter } from "./Counter";

export interface Env {
  COUNTER: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    let url = new URL(request.url);
    let name = url.searchParams.get("name") || "default";

    let id = env.COUNTER.idFromName(name);
    let obj = env.COUNTER.get(id);

    return obj.fetch(request);
  }
};

The Env interface declares that we have a binding to a Durable Object namespace called COUNTER. In the fetch handler, we get a name from the query string. This name determines which specific counter instance we want. idFromName always generates the same ID for the same name. get(id) gives us a stub to communicate with that instance, wherever it is in the world. Wrangler also requires the Counter class to be exported from this main module, so make sure it is re-exported here.
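To build intuition for that same-name, same-instance guarantee, here's a toy in-memory model of a namespace. This is a simplified sketch, not the real routing: the actual system resolves names to globally unique IDs and routes requests across data centers, but the key property is the same.

```typescript
// Toy model: a namespace that lazily creates one instance per name.
// The real DurableObjectNamespace does this globally; this sketch only
// illustrates the same-name -> same-instance property.
class ToyCounter {
  value = 0;
  increment(): number {
    return ++this.value;
  }
}

class ToyNamespace {
  private instances = new Map<string, ToyCounter>();

  // Like idFromName + get: the same name always resolves to the same instance.
  get(name: string): ToyCounter {
    let instance = this.instances.get(name);
    if (!instance) {
      instance = new ToyCounter();
      this.instances.set(name, instance);
    }
    return instance;
  }
}

const ns = new ToyNamespace();
ns.get("alpha").increment();
ns.get("alpha").increment();
ns.get("beta").increment();

console.log(ns.get("alpha").value); // 2: both "alpha" calls hit one instance
console.log(ns.get("beta").value);  // 1: "beta" is a separate instance
```

Because the mapping is deterministic, clients never coordinate: anyone who knows the name reaches the same state.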

We must declare these bindings in our configuration file, wrangler.toml.

name = "global-counter-api"
main = "src/index.ts"
compatibility_date = "2024-05-01"

[[durable_objects.bindings]]
name = "COUNTER"
class_name = "Counter"

[[migrations]]
tag = "v1"
new_classes = ["Counter"]

The durable_objects.bindings section creates the COUNTER namespace in our Worker’s environment. The migrations section tells the system about our new Durable Object class. This is required for deployment.

Now, run npx wrangler dev in your terminal. This starts a local development server. Open your browser to http://localhost:8787/. You should see 0. Now, send a POST request to increment it.

curl -X POST http://localhost:8787/increment

Refresh your browser. The counter is now 1. The state is persisted. Stop the dev server and start it again. Visit the page. It’s still 1. The storage survived the restart. This is the durability in Durable Objects.

But here’s a question: what happens if two users try to increment the counter at the exact same time? This is where the single-threaded model shines. Requests are queued and processed one after another. You never get a corrupted state from race conditions. The increment operation is atomic.
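You can see why this serialization matters with a small simulation. The sketch below is plain TypeScript, not Workers code: it models storage reads and writes as async steps. Unserialized increments interleave and lose updates; chaining each increment after the previous one, the way a Durable Object queues incoming requests, never does.

```typescript
// Simulated async storage, standing in for this.state.storage.
const storage = new Map<string, number>();
const get = async (key: string) => storage.get(key) ?? 0;
const put = async (key: string, value: number) => {
  storage.set(key, value);
};

// Unsafe: a read-modify-write with an await in the middle. Concurrent
// callers all read the same old value, then overwrite each other.
async function unsafeIncrement() {
  const value = await get("value");
  await put("value", value + 1);
}

// Serialized: each increment waits for the previous one to finish,
// the way a Durable Object queues its incoming requests.
let chain: Promise<void> = Promise.resolve();
function serializedIncrement(): Promise<void> {
  chain = chain.then(unsafeIncrement);
  return chain;
}

const result = (async () => {
  await Promise.all(Array.from({ length: 50 }, () => unsafeIncrement()));
  const lost = storage.get("value") ?? 0; // far less than 50: lost updates

  storage.set("value", 0);
  await Promise.all(Array.from({ length: 50 }, () => serializedIncrement()));
  const exact = storage.get("value") ?? 0; // exactly 50

  console.log({ lost, exact });
  return { lost, exact };
})();
```

In a Durable Object you get the serialized behavior for free; the platform applies this queuing to every request that reaches the instance.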

Let’s build something more complex. A chat room needs to handle real-time messages. We can use WebSockets with Durable Objects to manage connections. Create a new file, src/ChatRoom.ts.

export class ChatRoom {
  state: DurableObjectState;
  sessions: Set<WebSocket>;

  constructor(state: DurableObjectState) {
    this.state = state;
    // Restore any sockets the runtime kept open across hibernation
    this.sessions = new Set(this.state.getWebSockets());
  }

  async fetch(request: Request) {
    let pair = new WebSocketPair();
    let [client, server] = Object.values(pair);

    // Hand the socket to the runtime; events arrive via webSocketMessage
    this.state.acceptWebSocket(server);
    this.sessions.add(server);

    return new Response(null, { status: 101, webSocket: client });
  }

  // Called by the runtime when any accepted socket receives a message
  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
    for (let session of this.sessions) {
      if (session !== ws) {
        session.send(message);
      }
    }
  }
}

This object manages a set of WebSocket connections. When a new client connects, it's added to the sessions set. When any client sends a message, the object broadcasts it to all other connected clients. The state.acceptWebSocket(server) call is crucial. It hands the socket to the runtime, which then delivers incoming messages through the webSocketMessage method and keeps the connection alive even if the idle object is evicted from memory between messages.
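If you want to run the chat room alongside the counter, it needs its own binding and a new migration tag in wrangler.toml. A sketch of the additional entries (the binding name CHAT_ROOM is my choice; use whatever name you declare in your Env interface):

```toml
[[durable_objects.bindings]]
name = "CHAT_ROOM"
class_name = "ChatRoom"

[[migrations]]
tag = "v2"
new_classes = ["ChatRoom"]
```

Each migration tag must be unique and appended after earlier ones, so the platform knows the ChatRoom class is new in this deployment.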

How do we handle more permanent data, like chat history? We can combine in-memory state with persistent storage. Let’s modify our chat room to store the last 50 messages.

export class ChatRoomWithHistory {
  state: DurableObjectState;
  sessions: Set<WebSocket>;
  messages: string[];

  constructor(state: DurableObjectState) {
    this.state = state;
    this.sessions = new Set(this.state.getWebSockets());
    this.messages = [];

    this.state.blockConcurrencyWhile(async () => {
      let stored = await this.state.storage.get<string[]>("messages");
      this.messages = stored || [];
    });
  }

  async fetch(request: Request) {
    let pair = new WebSocketPair();
    let [client, server] = Object.values(pair);

    this.state.acceptWebSocket(server);
    this.sessions.add(server);

    // Send existing history to the new client
    server.send(JSON.stringify({ type: "history", data: this.messages }));

    return new Response(null, { status: 101, webSocket: client });
  }

  async webSocketMessage(ws: WebSocket, data: string | ArrayBuffer) {
    let message = data.toString();
    this.messages.push(message);

    // Keep only the last 50 messages
    if (this.messages.length > 50) {
      this.messages = this.messages.slice(-50);
    }

    // Persist the updated history
    await this.state.storage.put("messages", this.messages);

    // Broadcast to everyone else
    for (let session of this.sessions) {
      if (session !== ws) {
        session.send(message);
      }
    }
  }
}

Now, when a user joins, they receive the recent message history. Every new message is saved to storage. Notice we limit the history to 50 messages. This is important. Durable Object storage is not meant for massive datasets. It’s perfect for session data, real-time state, or recent activity logs.

What about scaling? Each Durable Object is a single thread. For a chat room with 10,000 users, all messages still go through one instance. Is that a problem? It depends on your traffic. For most chat applications, the bottleneck is network I/O, not CPU. The object can handle broadcasting to many connections efficiently.

But what if you need 100,000 separate counters? That’s fine. Each counter is its own independent instance. They can be spread across many machines. The system automatically scales them. You access each one by its unique name or ID.

Let’s talk about the developer experience. You call get(id) and then fetch(). You don’t need to know which physical machine the object is on. The system routes your request to it. If that machine is busy or fails, the object is moved automatically. Your code doesn’t change.

This is powerful for session management. Instead of storing session tokens in a global database, store them in a Durable Object keyed by user ID. The session data lives near the user’s current location. When they make a request, your Worker fetches their session object. The latency is minimal.
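Here’s a minimal sketch of that session pattern. It’s plain TypeScript with a hypothetical in-memory stand-in for the storage API so it runs outside the Workers runtime; in a real Durable Object you would use this.state.storage and name each object by user ID via idFromName.

```typescript
// Hypothetical stand-in for this.state.storage (same get/put/delete shape).
class MemoryStorage {
  private map = new Map<string, unknown>();
  async get<T>(key: string): Promise<T | undefined> {
    return this.map.get(key) as T | undefined;
  }
  async put(key: string, value: unknown): Promise<void> {
    this.map.set(key, value);
  }
  async delete(key: string): Promise<void> {
    this.map.delete(key);
  }
}

interface Session {
  userId: string;
  createdAt: number;
}

// One object per user: name the object by user ID, and every request
// for that user routes to the same instance, near their location.
class UserSession {
  constructor(private storage = new MemoryStorage()) {}

  async login(userId: string): Promise<Session> {
    const session: Session = { userId, createdAt: Date.now() };
    await this.storage.put("session", session);
    return session;
  }

  async current(): Promise<Session | undefined> {
    return this.storage.get<Session>("session");
  }

  async logout(): Promise<void> {
    await this.storage.delete("session");
  }
}

const sessionDemo = (async () => {
  const obj = new UserSession();
  await obj.login("user-123");
  const active = await obj.current();
  await obj.logout();
  const after = await obj.current();
  console.log(active?.userId, after);
  return { active, after };
})();
```

The design choice worth noting: because all reads and writes for one user funnel through one serialized instance, you get consistent session state without any locking or database round trips.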

Are there limits? Yes. Each Durable Object has a memory limit. It’s designed for small, focused state. Don’t try to store a multi-gigabyte dataset in one. Also, while objects are durable, they are not instantly replicated globally. There’s a primary location. If that location goes offline, a brief delay might occur while a new instance is created elsewhere.

The cost model is based on two things: how many objects you have, and how much they do. An object that’s idle costs almost nothing. An object processing constant WebSocket traffic will incur more charges. It’s pay-for-what-you-use.

I find the mental model liberating. You write your stateful logic as a simple class. The platform handles distribution, persistence, and scaling. You stop worrying about database connections and server provisioning.

To deploy your API, run npx wrangler deploy. Your code is pushed to the edge. Instances of your Durable Objects will spin up in regions where they’re needed. A user in Europe will interact with an instance likely in Frankfurt or London. A user in Asia will get one in Singapore or Tokyo. All while maintaining a single, consistent state.

This changes how we design applications. We can build collaborative tools, multiplayer games, and real-time dashboards that are fast for everyone. The old model of a central application server is no longer the only option.

I encourage you to try it. Start with a simple counter. Then, build a session manager. Finally, create a collaborative sketchpad. You’ll see how straightforward it is to manage state at the edge. What kind of low-latency application have you wanted to build but thought was too complex?

If you found this guide helpful, please share it with other developers. Have you used edge state before? What was your experience? Leave a comment below—I’d love to hear what you’re building and answer any questions.

