
How to Build a Secure and Scalable API Gateway with Express and Kong

Learn to combine Express and Kong to create a powerful, secure API gateway that simplifies authentication, routing, and rate limiting.


I was building a new service last week when I hit a familiar wall. My authentication logic was copied across three different Node.js services. Rate limiting was an afterthought, added inconsistently. When I needed to change how users accessed data, I had to update every service individually. That’s when I decided to stop patching holes and build a proper entry point. This isn’t just about writing code; it’s about creating a front door for your entire application that’s secure, fast, and easy to manage. Let me show you how to build that door.

Think of an API Gateway as the reception desk for your digital office. Every request comes here first. It checks IDs, directs traffic, and makes sure no single visitor hogs all the attention. Without it, each department (or microservice) needs its own security guard and traffic controller. That’s messy and hard to keep consistent.

Why combine Express and Kong? Express gives you complete control over your business logic in a language you already know. Kong handles the heavy lifting of routing and traffic management at a scale Express alone might struggle with. Together, they cover everything from validating a user’s token to making sure your database doesn’t get overwhelmed.

Ready to build? You’ll need a few tools installed. Make sure you have Node.js (version 18 or higher) and Docker running on your machine. Docker will help us run Kong and a database without complex local setup.

First, let’s create our project home and install the essentials.

mkdir api-gateway-project && cd api-gateway-project
npm init -y
npm install express jsonwebtoken bcryptjs
npm install dotenv helmet cors ioredis
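
These services read their secrets and connections from environment variables via dotenv. A minimal .env might look like this; the values are placeholders, and the Redis URL assumes an instance you run yourself (Redis isn’t part of the Kong compose file we’ll write later):

# .env (placeholder values, generate your own long random secrets)
JWT_ACCESS_SECRET=replace_with_a_long_random_string
JWT_REFRESH_SECRET=replace_with_a_different_long_random_string
REDIS_URL=redis://localhost:6379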

Now, let’s build the heart of our system: the authentication service. This is where we’ll issue and validate security tokens. We’ll use JSON Web Tokens (JWT), but with a crucial twist—refresh tokens for better security.

Here’s a basic structure for our token service. Notice how it stores refresh tokens in Redis? This lets us invalidate them if needed, which you can’t do with JWT alone.

// services/auth/tokenService.js
const jwt = require('jsonwebtoken');
const crypto = require('crypto');
const Redis = require('ioredis');

class TokenService {
  constructor() {
    this.redis = new Redis(process.env.REDIS_URL);
    this.accessSecret = process.env.JWT_ACCESS_SECRET;
    this.refreshSecret = process.env.JWT_REFRESH_SECRET;
  }

  async generateTokens(userId, userRole) {
    const accessToken = jwt.sign(
      { sub: userId, role: userRole, type: 'access' },
      this.accessSecret,
      { expiresIn: '15m' }
    );

    const refreshTokenId = crypto.randomBytes(16).toString('hex');
    const refreshToken = jwt.sign(
      { sub: userId, jti: refreshTokenId, type: 'refresh' },
      this.refreshSecret,
      { expiresIn: '7d' }
    );

    // Store the refresh token ID in Redis with an expiry
    await this.redis.setex(`refresh:${refreshTokenId}`, 604800, userId);

    return { accessToken, refreshToken };
  }
}

module.exports = TokenService;

Have you ever wondered what stops someone from just using a stolen token forever? The short-lived access token combined with a revocable refresh token is our answer. The access token works for 15 minutes. To get a new one, you need a valid refresh token, which we can track and revoke.
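
The class above only issues tokens; the revocation side is what makes that Redis entry useful. Here’s one way the same class could handle refresh and revocation. This is a sketch, and the method names are illustrative rather than part of the original service:

// services/auth/tokenService.js (continued) - illustrative methods
  async refreshAccessToken(refreshToken, currentUserRole) {
    // Throws if the signature is invalid or the token has expired
    const payload = jwt.verify(refreshToken, this.refreshSecret);
    if (payload.type !== 'refresh') {
      throw new Error('Not a refresh token');
    }

    // Reject refresh tokens we no longer track (revoked or already rotated)
    const storedUserId = await this.redis.get(`refresh:${payload.jti}`);
    if (!storedUserId || storedUserId !== String(payload.sub)) {
      throw new Error('Refresh token revoked or unknown');
    }

    // Rotate: drop the old refresh token and issue a fresh pair.
    // The caller supplies the user's current role (looked up from your user
    // store), since the refresh token deliberately doesn't carry it.
    await this.redis.del(`refresh:${payload.jti}`);
    return this.generateTokens(payload.sub, currentUserRole);
  }

  async revokeRefreshToken(refreshTokenId) {
    // Deleting the Redis entry is enough; the next refresh attempt will fail
    await this.redis.del(`refresh:${refreshTokenId}`);
  }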

Now, let’s get Kong into the mix. Kong is a gateway that sits in front of our Express services. We’ll run it using Docker. Create a docker-compose.yml file.

version: '3.8'
services:
  kong-migrations:
    image: kong:3.4
    command: kong migrations bootstrap
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: kong
    depends_on:
      - kong-database
    networks:
      - kong-net
    restart: on-failure

  kong:
    image: kong:3.4
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: kong
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_ADMIN_LISTEN: 0.0.0.0:8001   # expose the Admin API outside the container
    extra_hosts:
      - "host.docker.internal:host-gateway"   # lets Kong reach services on your host (needed on Linux)
    ports:
      - "8000:8000"   # Proxy port for API traffic
      - "8443:8443"   # Proxy SSL port
      - "8001:8001"   # Admin API port
    depends_on:
      - kong-database
      - kong-migrations
    networks:
      - kong-net

  kong-database:
    image: postgres:13
    environment:
      POSTGRES_USER: kong
      POSTGRES_DB: kong
      POSTGRES_PASSWORD: kong
    volumes:
      - kong-data:/var/lib/postgresql/data
    networks:
      - kong-net

networks:
  kong-net:
    driver: bridge

volumes:
  kong-data:

Run docker-compose up -d to start everything. The kong-migrations job bootstraps Kong’s database schema first; once it finishes, Kong listens on port 8000 for your API traffic and port 8001 for configuration.
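
Before wiring anything up, it’s worth confirming the Admin API responds. A quick check with Node 18’s built-in fetch (the Admin API root returns node metadata, including the version):

// check-kong.js - sanity check that Kong's Admin API is reachable
(async () => {
  const res = await fetch('http://localhost:8001/');
  const info = await res.json();
  console.log('Kong is up, version:', info.version);
})();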

With Kong running, we need to tell it about our services. Let’s say we have an Express user service running on port 3001. We register it with Kong’s Admin API.

# Create a Kong Service (points to our backend)
curl -i -X POST http://localhost:8001/services \
  --data name=user-service \
  --data url='http://host.docker.internal:3001'

# Create a Route for the service
curl -i -X POST http://localhost:8001/services/user-service/routes \
  --data paths[]=/users \
  --data name=user-route

Now, any request to http://localhost:8000/users will be forwarded to our Express app. But we’re still letting everyone in. Let’s add a guard. We’ll use a Kong plugin to check for a valid JWT on incoming requests.

# Enable the JWT plugin on the user-service (it applies to every route of that service)
curl -X POST http://localhost:8001/services/user-service/plugins \
  --data name=jwt

This simple command adds a powerful check. Kong will now reject any request to /users that doesn’t have a proper Authorization: Bearer <token> header. But how does Kong know if the token is valid? We need to give it a secret key to verify the signature. We do this by creating a “Consumer” in Kong and a JWT credential.

# Create a Consumer (represents a user or app)
curl -X POST http://localhost:8001/consumers \
  --data username=api_client_1

# Add a JWT credential for this consumer
curl -X POST http://localhost:8001/consumers/api_client_1/jwt \
  --data algorithm=HS256 \
  --data key=api_client_1 \
  --data secret=my_super_secret_key_here

The secret you provide here must match the one your Express auth service uses to sign its access tokens, and the token must carry an iss claim equal to the credential’s key (api_client_1 above), because that is how Kong’s JWT plugin figures out which consumer and secret to verify against. It’s a clean separation: Express creates the token, Kong validates it before the request even hits your business logic.
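
For the tokens from our TokenService to pass that check, the access token’s payload needs that iss claim. One way to adjust the sign call; KONG_JWT_KEY is an illustrative environment variable, not something we configured earlier:

// services/auth/tokenService.js (excerpt from generateTokens)
const accessToken = jwt.sign(
  {
    sub: userId,
    role: userRole,
    type: 'access',
    iss: process.env.KONG_JWT_KEY, // must equal the credential's `key` in Kong, e.g. 'api_client_1'
  },
  this.accessSecret, // must equal the credential's `secret` registered with Kong
  { expiresIn: '15m' }
);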

What about controlling how often someone can call your API? This is where rate limiting saves you from accidental overloads or bad actors. Let’s add a rate-limiting plugin to our route.

# Add rate limiting: 100 requests per minute per consumer
curl -X POST http://localhost:8001/services/user-service/plugins \
  --data name=rate-limiting \
  --data config.minute=100 \
  --data config.policy=local

Kong will now track requests per consumer (identified by their JWT) and return a 429 Too Many Requests response if they exceed the limit. The local policy means Kong stores the counters in memory, which is fast. For a distributed setup across multiple Kong nodes, you’d use redis.
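
If you want to watch the throttling from the client side, the response headers tell the story. A small check along these lines; it expects an access token issued by our auth service:

// rate-limit-check.js - inspect the rate-limit headers Kong adds to responses
async function checkRateLimit(accessToken) {
  const res = await fetch('http://localhost:8000/users/profile', {
    headers: { Authorization: `Bearer ${accessToken}` },
  });

  // The rate-limiting plugin reports the current window in response headers
  console.log('Remaining this minute:', res.headers.get('x-ratelimit-remaining-minute'));

  if (res.status === 429) {
    console.log('Throttled, retry after', res.headers.get('retry-after'), 'seconds');
  }
  return res.status;
}

module.exports = { checkRateLimit };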

But we can go further. What if you want to give admin users a higher limit than regular users? This is where custom logic in Express plus Kong’s Admin API can create dynamic rules. Imagine your Express auth service, when provisioning a user, calling the Admin API to attach a consumer-specific rate limit that matches their plan, as sketched below.
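
Here’s a rough sketch of that idea: a small helper the auth service could call against Kong’s Admin API. The helper name and the admin URL handling are illustrative, not part of the setup above:

// services/auth/kongAdmin.js - illustrative helper for per-consumer limits
const KONG_ADMIN_URL = process.env.KONG_ADMIN_URL || 'http://localhost:8001';

async function setConsumerRateLimit(consumerUsername, requestsPerMinute) {
  // Attach a rate-limiting plugin scoped to this consumer only
  const res = await fetch(`${KONG_ADMIN_URL}/consumers/${consumerUsername}/plugins`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      name: 'rate-limiting',
      config: { minute: requestsPerMinute, policy: 'local' },
    }),
  });

  if (!res.ok) {
    throw new Error(`Kong Admin API returned ${res.status}`);
  }
  return res.json();
}

// Example: give an admin client a higher ceiling than the service-wide default
// await setConsumerRateLimit('api_client_1', 1000);

module.exports = { setConsumerRateLimit };

Kong prefers the most specific plugin configuration, so a consumer-scoped rate limit takes precedence over the service-wide one for that consumer.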

Our Express app can now focus on what it does best: business logic. Here’s a simple user profile endpoint, secure in the knowledge that Kong has already validated the caller’s identity.

// services/user-service/app.js
const express = require('express');
const app = express();

app.get('/profile', (req, res) => {
  // Kong verified the JWT and forwards the matched consumer's identity in headers
  const consumerId = req.headers['x-consumer-id'];
  const consumerUsername = req.headers['x-consumer-username'];

  // Fetch user-specific data from your database
  const userData = { id: consumerId, name: 'Jane Doe', username: consumerUsername };
  res.json(userData);
});

app.listen(3001, () => console.log('User service on port 3001'));

See the x-consumer-id and x-consumer-username headers? Kong injects these after verifying the JWT, so your service knows exactly who is making the request without parsing the token again. It’s a clean handoff.
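
To see the whole chain working, issue a token and call the gateway directly. A rough end-to-end check, assuming tokenService.js exports the class, the environment variables and Redis from earlier are in place, and the token carries the iss claim Kong expects:

// e2e-check.js - token from the auth service, verified by Kong, answered by Express
const TokenService = require('./services/auth/tokenService');

(async () => {
  const { accessToken } = await new TokenService().generateTokens('42', 'admin');

  // Kong strips the matched /users prefix by default (strip_path: true),
  // so this request reaches the Express app as GET /profile
  const res = await fetch('http://localhost:8000/users/profile', {
    headers: { Authorization: `Bearer ${accessToken}` },
  });

  console.log(res.status, await res.json()); // expect 200 and the profile JSON
})();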

This setup forms a solid foundation. You have a dedicated service for authentication, a gateway enforcing security and limits, and your business services free to focus on their jobs. From here, you can add more: caching responses with Kong, aggregating data from multiple services, or setting up circuit breakers to prevent a failing service from taking down the gateway.

Building this changed how I deploy applications. It turns a collection of independent endpoints into a managed, observable, and secure API product. The initial setup takes time, but the consistency and control it provides are worth it. You stop worrying about basic security in every route and start thinking about delivering features.

What was the last bottleneck in your API that could have been solved at the gateway? I’d love to hear about your experiences. If this guide helped you see the structure behind the chaos, please share it with another developer who might be facing the same wall I did. Drop a comment below if you have questions or want to dive deeper into any specific piece.




