Kong (often called Kong API Gateway) is a tool that sits in front of your APIs and manages all incoming requests—kind of like a smart gatekeeper for APIs.
Simple explanation
Instead of clients calling your backend services directly, they go through Kong first:
Client → Kong → Your APIs
Kong decides:
- where the request goes
- whether it’s allowed
- how it should be handled
🔧 What Kong actually does
1. Routing (like Traefik, but API-focused)
- Routes requests to the correct backend service
- Supports paths, hosts, headers, etc.
Example:
```
/users  → user-service
/orders → order-service
```
2. Authentication & Security
- API keys
- OAuth2 / JWT
- Rate limiting (prevent abuse)
3. Plugins (this is Kong’s superpower)
Kong uses plugins to add features like:
- logging
- caching
- transformations
- analytics
4. Load balancing
- Distributes traffic across multiple service instances
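In declarative config, load balancing is modeled with `upstreams` and `targets`. A minimal sketch (the instance hostnames `app-1`/`app-2` and the upstream name are assumptions for illustration):

```yaml
# Hypothetical sketch: two app instances behind one Kong upstream.
upstreams:
  - name: app-upstream
    targets:
      - target: app-1:3000
        weight: 100
      - target: app-2:3000
        weight: 100

services:
  - name: app-service
    host: app-upstream   # points at the upstream name, not a single host
    port: 3000
    protocol: http
```

Kong then round-robins requests across the targets by weight.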
5. Observability
- Logs requests
- Tracks usage
- Helps debug API issues
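Request logging, for example, is just another plugin. A minimal sketch using Kong's `file-log` plugin (writing to stdout is an assumption that suits containerized setups):

```yaml
# Sketch: log every proxied request as a JSON line to the container's stdout.
plugins:
  - name: file-log
    config:
      path: /dev/stdout
```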
Kong vs Traefik
| Feature | Kong | Traefik |
|---|---|---|
| Focus | APIs | General web traffic |
| Plugins | Very powerful | More limited |
| Auth | Strong, built-in | Basic |
| Use case | Microservices APIs | Containers & routing |
Quick takeaway:
- Traefik → routing + infrastructure
- Kong → API management + security
Where Kong fits in a system
```
Frontend / Mobile App
          ↓
        Kong
          ↓
Microservices (Node, Python, etc.)
          ↓
       Database
```
Example use case
Imagine you’re building an app with:
- user service
- payment service
- order service
Kong can:
- route requests to each service
- require authentication
- limit requests per user
- log all API calls
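Putting those four points together, a declarative config for this app could look like the sketch below (service names, hostnames, and limits are all hypothetical):

```yaml
# Hypothetical sketch: one route per microservice, with auth and
# rate limiting applied globally. Hostnames/limits are assumptions.
_format_version: "3.0"

services:
  - name: user-service
    url: http://users:3000
    routes:
      - name: users-route
        paths:
          - /users
  - name: payment-service
    url: http://payments:3000
    routes:
      - name: payments-route
        paths:
          - /payments
  - name: order-service
    url: http://orders:3000
    routes:
      - name: orders-route
        paths:
          - /orders

plugins:
  - name: key-auth        # require authentication on every route
  - name: rate-limiting   # limit requests per client
    config:
      minute: 60
      policy: local
```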
In DevOps terms
Kong is part of:
- API Gateway layer
- Often used with:
- Kubernetes
- Docker
In one sentence
Kong is an API gateway that controls, secures, and manages traffic to your backend services.
Here’s a working Kong Docker example you can compare directly with Traefik.
The cleanest starter setup is Kong Gateway in DB-less mode. In this mode, Kong runs without a database and reads its routes/services/plugins from a single declarative YAML file, which Kong documents as a supported deployment mode and a good fit for automation and CI/CD. (Kong Docs)
What you’ll build
Client → Kong → Your app
Kong will:
- listen on port 8000 for proxied API traffic
- expose an Admin API on port 8001 for local management/testing
- route `/api` to your Node app
- optionally apply plugins like rate limiting or key auth later
Kong’s Docker docs show Compose-based installs, and Kong’s gateway overview describes it as sitting in front of upstream services to control, analyze, and route requests. (Kong Docs)
Project structure
```
kong-starter/
├── app/
│   ├── package.json
│   └── server.js
├── kong/
│   └── kong.yml
├── Dockerfile
└── compose.yml
```
1) app/package.json
```json
{
  "name": "kong-starter",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  }
}
```
2) app/server.js
```js
const http = require("http");

const PORT = process.env.PORT || 3000;

const server = http.createServer((req, res) => {
  const body = {
    ok: true,
    message: "Hello from app behind Kong",
    method: req.method,
    url: req.url,
    host: req.headers.host,
    time: new Date().toISOString()
  };
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify(body, null, 2));
});

server.listen(PORT, () => {
  console.log(`Server listening on ${PORT}`);
});
```
3) Dockerfile
```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY app/package.json ./
RUN npm install --omit=dev
COPY app/server.js ./
ENV PORT=3000
EXPOSE 3000
CMD ["npm", "start"]
```
4) kong/kong.yml
This is the declarative config Kong loads in DB-less mode.
```yaml
_format_version: "3.0"

services:
  - name: app-service
    url: http://app:3000
    routes:
      - name: app-route
        paths:
          - /api
```
This tells Kong:
- there is an upstream service at `http://app:3000`
- requests hitting `/api` should be proxied there
Kong’s DB-less docs explain that entities are configured through a declarative YAML or JSON file when database=off. (Kong Docs)
5) compose.yml
```yaml
services:
  kong:
    image: kong:3.10
    environment:
      KONG_DATABASE: "off"
      KONG_DECLARATIVE_CONFIG: /kong/declarative/kong.yml
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_ADMIN_LISTEN: "0.0.0.0:8001"
    ports:
      - "8000:8000" # proxy
      - "8001:8001" # admin api
    volumes:
      - ./kong/kong.yml:/kong/declarative/kong.yml:ro

  app:
    build:
      context: .
      dockerfile: Dockerfile
```
Kong’s Docker install docs support Docker Compose installs, and Kong’s read-only/DB-less docs show using database=off with a declarative config file passed into the container. (Kong Docs)
6) Run it
```sh
docker compose up -d --build
```
Then test it:
```sh
curl http://localhost:8000/api
```
You should get JSON back from your Node app.
You can also inspect Kong locally through the Admin API:
```sh
curl http://localhost:8001/services
```
One important note: Kong documents that in DB-less mode the Admin API is effectively read-only. You cannot create or update entities through it, because configuration comes from the declarative file instead. (Kong Docs)
7) Add rate limiting
One of Kong’s main strengths is plugins. Kong’s overview emphasizes its plugin-based approach for implementing API traffic policies. (Kong Docs)
Update kong/kong.yml like this:
```yaml
_format_version: "3.0"

services:
  - name: app-service
    url: http://app:3000
    routes:
      - name: app-route
        paths:
          - /api

plugins:
  - name: rate-limiting
    config:
      minute: 5
      policy: local
```
Then restart Kong so it re-reads the declarative file (in DB-less mode, Kong loads the config at startup, so editing the mounted file alone is not enough):

```sh
docker compose restart kong
```
Now Kong will rate-limit requests through the gateway.
8) Kong vs Traefik in this exact setup
Traefik version
You used labels on the app container:
```yaml
- "traefik.http.routers.app.rule=Host(`app.localhost`)"
```
Traefik discovers Docker containers automatically and builds routing from labels. That is the core of its Docker provider model.
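For comparison, a minimal Traefik-side sketch built on its Docker provider (image tag, entrypoint, and label are assumptions for illustration):

```yaml
# Hypothetical sketch of the Traefik equivalent: routing comes from
# container labels rather than a central config file.
services:
  traefik:
    image: traefik:v3.0
    command:
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  app:
    build: .
    labels:
      - "traefik.http.routers.app.rule=Host(`app.localhost`)"
```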
Kong version
You define a service and route in kong.yml:
```yaml
services:
  - name: app-service
    url: http://app:3000
    routes:
      - paths:
          - /api
```
So the practical difference is:
- Traefik feels more infrastructure-native and auto-discovery-driven
- Kong feels more API-platform-driven, with explicit services, routes, and plugins
Kong’s docs center services, routes, plugins, and deployment modes as the main model for managing API traffic. (Kong Docs)
9) When to use which
Use Traefik when you want:
- simple reverse proxying
- automatic Docker/Kubernetes discovery
- quick app routing
- built-in HTTPS for web apps
Use Kong when you want:
- API gateway features
- auth, rate limiting, transformations, analytics
- a plugin-heavy API management layer
- more explicit API governance
That’s an inference from how each product is documented: Traefik emphasizes reverse proxying and dynamic service discovery, while Kong emphasizes API traffic policies through plugins and gateway entities. (Kong Docs)
10) The easiest mental model
- Traefik = “send traffic to my containers”
- Kong = “manage and secure my APIs”
11) Resume-worthy project line
Built a containerized API service behind Kong Gateway in DB-less mode using declarative configuration for routing and traffic policy management.
Here’s the same Kong project, but now with API key auth + rate limiting — which is where Kong starts to feel very different from Traefik.
Kong’s Key Authentication plugin can require clients to send an API key in a header, query string, or request body, and Kong’s Rate Limiting plugin can throttle requests by time window. In DB-less mode, you define all of that declaratively in the config file Kong loads at startup. (Kong Docs)
What this version does
Requests to your app will:
- go through Kong on `http://localhost:8000`
- require an API key
- be limited to 5 requests per minute
- route to your Node app on `/api`
In Kong’s rate-limiting docs, if there is an auth layer, the plugin uses the authenticated Consumer for identifying clients; otherwise it falls back to client IP. (Kong Docs)
Updated kong/kong.yml
```yaml
_format_version: "3.0"

services:
  - name: app-service
    url: http://app:3000
    routes:
      - name: app-route
        paths:
          - /api

plugins:
  - name: key-auth
    service: app-service
    config:
      key_names:
        - apikey
  - name: rate-limiting
    service: app-service
    config:
      minute: 5
      policy: local

consumers:
  - username: demo-client
    keyauth_credentials:
      - key: super-secret-demo-key
```
Why this works:
- `key-auth` protects the service with API key authentication. (Kong Docs)
- `key_names: [apikey]` tells Kong to look for the API key under that name. Kong documents that keys can be supplied in headers, query params, or request body. (Kong Docs)
- `rate-limiting` enforces request quotas over periods like seconds, minutes, hours, and more. (Kong Docs)
- `policy: local` stores counters in-memory on the node; Kong notes this has minimal performance impact but is less accurate across multiple nodes. (Kong Docs)
- `consumers` plus `keyauth_credentials` gives the client an identity and an API key in DB-less declarative config. That fits Kong's DB-less model, where config is the source of truth. (Kong Docs)
compose.yml
You can keep the same Compose file structure as before:
```yaml
services:
  kong:
    image: kong:3.10
    environment:
      KONG_DATABASE: "off"
      KONG_DECLARATIVE_CONFIG: /kong/declarative/kong.yml
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_ADMIN_LISTEN: "0.0.0.0:8001"
    ports:
      - "8000:8000"
      - "8001:8001"
    volumes:
      - ./kong/kong.yml:/kong/declarative/kong.yml:ro

  app:
    build:
      context: .
      dockerfile: Dockerfile
```
Kong’s Docker install docs support Compose installs, and DB-less deployments use KONG_DATABASE=off plus a declarative config file path. (Kong Docs)
Start it
```sh
docker compose up -d --build
```

If the stack from the earlier section is still running, also restart Kong so it re-reads the changed `kong.yml`: `docker compose restart kong`.
Test without an API key
```sh
curl -i http://localhost:8000/api
```
This should fail with an HTTP 401 because the route is protected by key-auth. Kong's Key Auth plugin requires a valid key for access. (Kong Docs)
Test with the API key
Send the key in the apikey header:
```sh
curl -i \
  -H "apikey: super-secret-demo-key" \
  http://localhost:8000/api
```
That should succeed.
You can also pass the key as a query string because Kong’s Key Auth plugin supports query string auth too. (Kong Docs)
```sh
curl -i "http://localhost:8000/api?apikey=super-secret-demo-key"
```
Test the rate limit
Run this several times quickly:
```sh
for i in {1..6}; do
  curl -s -o /dev/null -w "%{http_code}\n" \
    -H "apikey: super-secret-demo-key" \
    http://localhost:8000/api
done
```
You should see the first few succeed and then a 429 once you exceed the per-minute limit. Kong’s rate-limiting plugin is designed to cap requests over configured windows like minute: 5. (Kong Docs)
Why this is more “API gateway” than reverse proxy
With Traefik, the main idea was: “route traffic to the right service.” With this Kong setup, the gateway is also enforcing who can call the API and how often they can call it. Kong’s docs frame plugins like Key Auth and Rate Limiting as first-class traffic policy features for services and routes. (Kong Docs)
A practical mental model
- Traefik: “Send requests to the right app.”
- Kong: “Control access to the API, then send requests to the app.”
That is an inference from their documented feature emphasis: Traefik centers dynamic routing and service discovery, while Kong centers API traffic policy through gateway entities and plugins. (Kong Docs)
Good next upgrades
The next Kong features that are most worth learning are:
- JWT auth
- request/response transformation
- ACLs by consumer group
- logging plugins
- declarative config managed from Git
Those all build naturally on Kong’s plugin model and DB-less configuration workflow. (Kong Docs)
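As a starting point for the first item, JWT auth in declarative config might look like the sketch below (the consumer name, issuer key, and secret are placeholders, not values from this tutorial):

```yaml
# Hypothetical sketch: protect app-service with Kong's jwt plugin.
plugins:
  - name: jwt
    service: app-service

consumers:
  - username: demo-client
    jwt_secrets:
      - key: demo-issuer          # must match the token's `iss` claim
        secret: change-me-secret  # HS256 signing secret (placeholder)
```

Clients would then send a signed JWT in the `Authorization: Bearer ...` header instead of an API key.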