Kong Gateway

In Kong Gateway, the Service and Route are core entities that determine how external requests are proxied to your upstream APIs or services.

Here’s how they work together:


1. Service (What you proxy to)

A Service in Kong represents an upstream API or backend application you want to expose via Kong.

Example:

{
  "name": "example-service",
  "url": "http://my-upstream-api:8080"
}

  • This defines where Kong should send requests once they are matched.
  • It includes details like host, port, path, or protocol.

2. Route (How requests are matched)

A Route defines the rules that determine which requests should be forwarded to a Service.

Example:

{
  "paths": ["/api/v1"],
  "methods": ["GET", "POST"],
  "service": { "id": "example-service-id" }
}

  • It maps incoming requests (based on path, method, host, headers, etc.) to a specific Service.
  • One service can have multiple routes (e.g., /v1/users, /v1/orders, etc.).

The Flow

Client --> Kong Gateway --> [Route matched] --> [Service invoked] --> Upstream API

  1. Kong receives a request.
  2. It checks the Route rules (e.g., path, host).
  3. If a match is found, it forwards the request to the associated Service.
  4. The upstream service processes it and returns the response via Kong.
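The matching step in that flow can be sketched as a tiny shell function. This is an illustrative simplification, not Kong's actual matcher (Kong also weighs hosts, methods, headers, and regex priorities), but it captures the "most specific path wins" idea:

```shell
#!/usr/bin/env bash
# Toy longest-prefix route matcher (illustrative only; not Kong internals).
# Given a request path and a list of configured route path prefixes,
# pick the longest prefix that matches.
match_route() {
  local req="$1"; shift
  local best=""
  for p in "$@"; do
    case "$req" in
      "$p"*) if [ "${#p}" -gt "${#best}" ]; then best="$p"; fi ;;
    esac
  done
  echo "${best:-no-match}"
}

match_route /api/v1/users /api /api/v1   # prints: /api/v1
match_route /health /api /api/v1         # prints: no-match
```

With both /api and /api/v1 configured, a request to /api/v1/users picks the more specific /api/v1, which mirrors why Kong lets one service carry several increasingly specific routes.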

Plugins

You can attach plugins (like rate-limiting, auth, logging) at:

  • Global level (applies to all requests)
  • Service level (applies to all routes of a service)
  • Route level (applies only to that route)
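Scoping can be pictured as a precedence lookup: the most specific scope wins. A minimal sketch of that idea (my own illustration, not Kong's plugin resolver):

```shell
#!/usr/bin/env bash
# Illustrative scope resolution: route-level config beats service-level,
# which beats global. An empty string means "no plugin at that scope".
effective_scope() {
  local route_cfg="$1" service_cfg="$2" global_cfg="$3"
  if [ -n "$route_cfg" ]; then echo "route"
  elif [ -n "$service_cfg" ]; then echo "service"
  elif [ -n "$global_cfg" ]; then echo "global"
  else echo "none"
  fi
}

effective_scope "" "rate-limiting" "rate-limiting"   # prints: service
```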

Quick Example via curl

# Create a service
curl -i -X POST http://localhost:8001/services \
  --data name=example-service \
  --data url=http://httpbin.org

# Create a route to that service
curl -i -X POST http://localhost:8001/services/example-service/routes \
  --data paths[]=/test

Now, accessing http://localhost:8000/test will proxy to http://httpbin.org.
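The same service-and-route pair can also be kept declaratively. A sketch of an equivalent decK file (layout assumed from decK's dump output; verify with decK's validate command before syncing):

```yaml
_format_version: "3.0"
services:
  - name: example-service
    protocol: http
    host: httpbin.org
    port: 80
    routes:
      - name: test-route
        paths:
          - /test
```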


Example 2:

Let’s break down how Services and Routes work in Kong Gateway, with a clear explanation, flow, and examples.


🔹 What Is a Service in Kong?

A Service in Kong Gateway represents an upstream API or microservice. This is the destination Kong forwards requests to. You define a service with details like the host, port, and protocol (HTTP, HTTPS, TCP, etc.).

Example:

{
  "name": "user-service",
  "url": "http://user-api.internal:8080"
}

Or more explicitly:

{
  "name": "user-service",
  "host": "user-api.internal",
  "port": 8080,
  "protocol": "http"
}


🔸 What Is a Route in Kong?

A Route defines how incoming client requests are mapped to a Service. Routes define rules and match criteria like:

  • Path (e.g., /users)
  • HTTP method (e.g., GET, POST)
  • Host (e.g., api.myapp.com)
  • Headers
  • SNI (for SSL)
  • TCP/UDP port

Example:

{
  "paths": ["/users"],
  "methods": ["GET", "POST"],
  "service": { "id": "uuid-of-user-service" }
}


🔄 How Do They Work Together?

  1. A client sends a request to Kong (e.g., GET https://api.myapp.com/users).
  2. Kong matches the request to a Route based on the defined criteria.
  3. The Route is linked to a Service.
  4. Kong forwards the request to the upstream Service.
  5. The response is returned to the client.

πŸ” Flow Diagram (Simplified)

Client
  |
  v
[Request: GET /users]
  |
  v
[Kong Gateway]
  |
  β”œβ”€β–Ί Match Route: "/users" β†’ user-service
  |
  └─► Forward to: http://user-api.internal:8080
              |
              v
      [Upstream Service Response]
              |
              v
          [Kong Gateway]
              |
              v
            Client


🧪 Practical Example Using curl

1. Create Service

curl -i -X POST http://localhost:8001/services \
  --data name=user-service \
  --data url='http://user-api.internal:8080'

2. Create Route

curl -i -X POST http://localhost:8001/services/user-service/routes \
  --data paths[]='/users'

Now, when you call:

curl http://localhost:8000/users

Kong will route it to:

http://user-api.internal:8080


Advanced Matching Capabilities

Routes can also match based on:

  • hosts (virtual hosts like api.myapp.com)
  • methods (GET, POST, etc.)
  • headers (e.g., X-Version)
  • snis (for SSL/TLS)
  • sources/destinations (for TCP streams)

Best Practices

  • Keep one route per API path unless multi-match is needed.
  • Use tags to categorize services/routes for environments.
  • Apply plugins at service or route level depending on scope:
    • Service-level plugin: applies to all routes to that service.
    • Route-level plugin: applies only to that specific route.


Kong Logs

Why Only IP is Logged for Upstream

  1. Upstream Targets Are Resolved to IPs
    • When you define an upstream with a hostname and add targets, Kong internally resolves the hostname to an IP using DNS and then proxies to that IP.
    • Kong logs the resolved IP address, not the hostname, to provide the actual connection endpoint for traceability.
  2. Performance and Audit Reasons
    • Logging IPs ensures consistency across distributed deployments, since DNS entries may change over time.
    • IPs provide exact traceability in debugging and auditing upstream behavior.
  3. DNS Caching and TTL
    • Kong caches DNS results. Once a hostname is resolved to an IP, that IP is used and logged during the TTL window.
  4. No Custom Log Format or Plugin in Use
    • By default, Kong does not resolve or log upstream hostnames unless a custom plugin or log transformer is used.
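The TTL-cached resolution described above behaves roughly like this sketch (a toy model with a fake fixed IP and an explicit clock passed in; not Kong's resolver):

```shell
#!/usr/bin/env bash
# Toy TTL cache: re-resolve only when the cached entry is older than TTL.
CACHE_IP=""
CACHE_AT=0
TTL=30

resolve() {
  local now="$1"   # time passed explicitly so the demo is deterministic
  if [ -z "$CACHE_IP" ] || [ $(( now - CACHE_AT )) -ge "$TTL" ]; then
    CACHE_IP="203.0.113.10"   # stand-in for a real DNS answer (TEST-NET-3 range)
    CACHE_AT="$now"
    echo "resolved $CACHE_IP (logged)"
  else
    echo "cached $CACHE_IP (logged)"
  fi
}

resolve 100   # resolves fresh
resolve 110   # within TTL: serves and logs the cached IP
resolve 140   # TTL expired: resolves again
```

This is why two requests seconds apart log the same upstream IP even if the DNS record changed in between.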

Deploy routes and plugins in Kong Gateway

Here’s a minimal yet flexible Ansible playbook to deploy routes in Kong Gateway using the uri module to interact with the Admin API. This example assumes:

  • Kong Admin API is accessible (e.g. http://localhost:8001).
  • A service is already defined (or you can create one using Ansible).
  • You’re deploying a route for an existing service.

✅ Example Directory Layout

kong_routes/
β”œβ”€β”€ playbook.yml
└── vars/
    └── routes.yml


📄 vars/routes.yml

kong_admin_url: "http://localhost:8001"

kong_routes:
  - name: route1
    service: example-service
    paths: ["/example"]
    methods: ["GET"]
    strip_path: true


📄 playbook.yml

- name: Deploy Routes to Kong
  hosts: localhost
  connection: local
  vars_files:
    - vars/routes.yml
  tasks:
    - name: Ensure route exists in Kong
      uri:
        url: "{{ kong_admin_url }}/routes"
        method: POST
        body_format: json
        headers:
          Content-Type: "application/json"
        body: >-
          {
            "name": "{{ item.name }}",
            "paths": {{ item.paths | to_json }},
            "methods": {{ item.methods | to_json }},
            "strip_path": {{ item.strip_path }},
            "service": { "name": "{{ item.service }}" }
          }
        status_code: 201
      loop: "{{ kong_routes }}"
      ignore_errors: true  # optional: prevents failure if route exists
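The POST-plus-ignore_errors pattern masks real failures. Kong's Admin API also supports upsert via PUT on a named resource, so a variant like the following (a sketch; the expected 200 status on both create and update should be verified against your Kong version) is closer to idempotent:

```yaml
- name: Ensure route exists in Kong (upsert by name)
  uri:
    url: "{{ kong_admin_url }}/routes/{{ item.name }}"
    method: PUT
    body_format: json
    headers:
      Content-Type: "application/json"
    body:
      paths: "{{ item.paths }}"
      methods: "{{ item.methods }}"
      strip_path: "{{ item.strip_path }}"
      service:
        name: "{{ item.service }}"
    status_code: 200
  loop: "{{ kong_routes }}"
```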


✅ Optional: Create Service First

Add this before the route task if needed (set url to your backend):

    - name: Ensure service exists
      uri:
        url: "{{ kong_admin_url }}/services"
        method: POST
        body_format: json
        headers:
          Content-Type: "application/json"
        body: >-
          {
            "name": "{{ item.service }}",
            "url": "http://mockbin.org"  # change to your backend URL
          }
        status_code: 201
      loop: "{{ kong_routes }}"
      ignore_errors: true


For Kong Konnect or an Admin API protected by RBAC, add an authentication header to each uri task, for example:

headers:
  Content-Type: "application/json"
  Authorization: "Bearer {{ kong_admin_token }}"

A 401 Unauthorized error in Kong Gateway typically indicates that the request lacks valid authentication credentials required to access the resource. Here’s how you can troubleshoot and resolve this issue:


πŸ” Common Causes and Solutions

1. Missing or Invalid API Key (Key Authentication Plugin)

If you’re using the Key Authentication plugin, Kong expects a valid API key in the request. (Kong Docs)

  • Symptom: Receiving a 401 error with the message "No API key found in request" or "Invalid authentication credentials". (Turbolab Technologies)
  • Solution:
    • Create a Consumer: curl -i -X POST http://localhost:8001/consumers/ --data "username=your_username"
    • Assign an API Key to the Consumer: curl -i -X POST http://localhost:8001/consumers/your_username/key-auth --data "key=your_api_key"
    • Include the API Key in Your Request: curl -i http://localhost:8000/your_service -H "apikey: your_api_key"
    • Note: The header name (apikey) should match the key_names configured in your Key Authentication plugin. (Kong Docs)
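The key-auth decision ultimately boils down to a lookup of the presented key. A toy model of that check (illustrative only; not the plugin's code):

```shell
#!/usr/bin/env bash
# Toy key-auth: compare the presented apikey header value against the
# consumer's provisioned key and emit the status Kong would return.
VALID_KEY="your_api_key"

check_key() {
  if [ "$1" = "$VALID_KEY" ]; then
    echo "200 OK"
  else
    echo "401 Unauthorized"
  fi
}

check_key "your_api_key"   # 200 OK
check_key "wrong_key"      # 401 Unauthorized
```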

2. Expired or Invalid JWT Token (JWT Plugin)

If you’re using the JWT plugin, ensure that the token is valid and not expired.

  • Symptom: Receiving a 401 error with the message "Invalid token" or "Token expired".
  • Solution:
    • Verify the Token: Use tools like jwt.io to decode and inspect the token.
    • Check Expiration: Ensure the exp claim has not passed.
    • Validate Signature: Confirm that the token’s signature matches the expected signing key.(Auth0 Community)
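To inspect the exp claim without jwt.io, the payload segment can be base64url-decoded in the shell. A self-contained sketch using a locally built throwaway token (this only reads claims; it does NOT verify the signature):

```shell
#!/usr/bin/env bash
# Decode the middle (payload) segment of a JWT and print its claims.
b64url_decode() {
  local s
  s=$(printf '%s' "$1" | tr '_-' '/+')            # base64url -> base64
  while [ $(( ${#s} % 4 )) -ne 0 ]; do s="${s}="; done   # restore padding
  printf '%s' "$s" | base64 -d
}

# Build a throwaway token so the example runs offline.
payload='{"sub":"demo","exp":1700000000}'
enc=$(printf '%s' "$payload" | base64 | tr -d '=\n' | tr '/+' '_-')
token="eyJhbGciOiJIUzI1NiJ9.${enc}.fakesig"

claims=$(b64url_decode "$(printf '%s' "$token" | cut -d. -f2)")
echo "$claims"   # {"sub":"demo","exp":1700000000}
```

Compare the printed exp against `date +%s` to see whether the token has expired.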

3. OpenID Connect (OIDC) Plugin Issues

When using the OIDC plugin, a 401 error can occur if the plugin’s cache is outdated after changes to the Identity Provider (IdP).(support.konghq.com)

  • Symptom: Receiving a 401 error after updating the IdP’s signing certificate.
  • Solution:
    • Clear Plugin Cache: Restart Kong or clear the plugin’s cache to fetch the updated signing keys from the IdP.
    • Verify IdP Configuration: Ensure that the IdP’s metadata URL is correctly configured in the plugin settings.

4. Plugin Configuration on Specific Routes or Services

Ensure that authentication plugins are correctly configured on the intended routes or services.

  • Symptom: Receiving a 401 error on specific endpoints.
  • Solution:
    • Check Plugin Association: Use the Admin API to list plugins associated with your routes or services: curl http://localhost:8001/plugins
    • Adjust Configuration: Ensure that the plugin is enabled and correctly configured for the desired routes or services.

πŸ› οΈ Additional Troubleshooting Steps

  • Inspect Kong Logs: Check the Kong error logs for detailed information.
  tail -f /usr/local/kong/logs/error.log

  • Use Debug Headers: Add the X-Kong-Debug: 1 header to your requests to get more insight into how Kong processes them.
  • Verify Plugin Order: Ensure that authentication plugins are executed in the correct order, especially when multiple plugins are used.


Deploy services in Kong

To export a Kong service with its route(s) and plugin(s) and re-deploy it via Ansible, follow this structured approach:


Step 1: Export Kong Configuration (Service + Route + Plugins)

Use Kong’s Admin API or deck (Kong’s declarative configuration tool).

Option A: Use deck

deck dump --select-tag my-service-tag --output-file exported.yaml

🔹 Tagging helps isolate what to export (tag services, routes, and plugins with my-service-tag beforehand).
🔹 You can also export the full config with deck dump.

Option B: Use cURL (for manual/scripted extraction)

# Get service
curl -s http://<KONG_ADMIN>:8001/services/my-service > service.json

# Get route(s) for the service
curl -s http://<KONG_ADMIN>:8001/services/my-service/routes > routes.json

# Get plugins for the service
curl -s http://<KONG_ADMIN>:8001/services/my-service/plugins > plugins.json


Step 2: Transform into Ansible-compatible format

Create a folder structure:

kong/
├── files/
│   └── exported.yaml  # if using deck
├── templates/
│   ├── service.json.j2
│   ├── routes.json.j2
│   └── plugins.json.j2
└── tasks/
    └── deploy-kong.yml


✅ Step 3: Create Ansible Playbook

If using deck:

# tasks/deploy-kong.yml
- name: Deploy config to Kong via deck
  ansible.builtin.command:
    cmd: "deck sync --state exported.yaml"
  args:
    chdir: "{{ playbook_dir }}/files"

If using cURL approach (raw API):

- name: Create service in Kong
  uri:
    url: "http://{{ kong_admin_host }}:8001/services"
    method: POST
    body: "{{ lookup('file', 'templates/service.json.j2') | from_json }}"
    body_format: json
    status_code: [201, 409]  # 409 = already exists
    headers:
      Content-Type: "application/json"

- name: Create route for the service
  uri:
    url: "http://{{ kong_admin_host }}:8001/services/{{ service_name }}/routes"
    method: POST
    body: "{{ lookup('file', 'templates/routes.json.j2') | from_json }}"
    body_format: json
    status_code: [201, 409]
    headers:
      Content-Type: "application/json"

- name: Add plugins to service
  uri:
    url: "http://{{ kong_admin_host }}:8001/services/{{ service_name }}/plugins"
    method: POST
    body: "{{ lookup('file', 'templates/plugins.json.j2') | from_json }}"
    body_format: json
    status_code: [201, 409]
    headers:
      Content-Type: "application/json"


Step 4: Run Playbook

ansible-playbook deploy-kong.yml -e "kong_admin_host=your.kong.gateway"


Optional: Use Tags to Scope with deck

Tag services during creation to support selective dumps later:

curl -X PATCH http://localhost:8001/services/my-service \
  --data "tags=my-service-tag"

Example service.json.j2:

{
  "name": "my-service",
  "host": "example.internal",
  "port": 8080,
  "protocol": "http",
  "path": "/api",
  "connect_timeout": 60000,
  "write_timeout": 60000,
  "read_timeout": 60000,
  "retries": 5,
  "tags": ["exported", "my-tag"]
}

Full Ansible playbook

---
- name: Deploy Kong service with routes and plugins
  hosts: kong
  gather_facts: no
  vars:
    kong_admin_url: "{{ kong_admin_host | default('http://localhost:8001') }}"
  tasks:

    - name: Load service JSON
      set_fact:
        service_payload: "{{ lookup('file', 'templates/service.json.j2') | from_json }}"

    - name: Create service
      uri:
        url: "{{ kong_admin_url }}/services"
        method: POST
        body: "{{ service_payload }}"
        body_format: json
        status_code: [201, 409]
        headers:
          Content-Type: "application/json"

    - name: Load routes JSON (could be a single or multiple routes)
      set_fact:
        routes_payload: "{{ lookup('file', 'templates/routes.json.j2') | from_json }}"

    - name: Create route(s) for the service
      uri:
        url: "{{ kong_admin_url }}/services/{{ service_payload.name }}/routes"
        method: POST
        body: "{{ item }}"
        body_format: json
        status_code: [201, 409]
        headers:
          Content-Type: "application/json"
      loop: "{{ routes_payload | type_debug == 'list' | ternary(routes_payload, [routes_payload]) }}"

    - name: Load plugins JSON
      set_fact:
        plugins_payload: "{{ lookup('file', 'templates/plugins.json.j2') | from_json }}"

    - name: Add plugin(s) to the service
      uri:
        url: "{{ kong_admin_url }}/services/{{ service_payload.name }}/plugins"
        method: POST
        body: "{{ item }}"
        body_format: json
        status_code: [201, 409]
        headers:
          Content-Type: "application/json"
      loop: "{{ plugins_payload }}"

Deploy AWX (Rancher)

If you turn on (start/deploy) cattle-system pods (especially cattle-cluster-agent) and it crashes your cluster,
it means the Rancher agents are broken or misconfigured and are overloading, blocking, or breaking Kubernetes internally.


In detail, here’s why this can happen:

Cause | What Happens | Why It Crushes the Cluster
❌ Rancher agents fail to connect and keep retrying | They flood the Kubernetes API server with reconnect attempts | API server gets overloaded, becomes unresponsive
❌ Wrong Rancher URL or broken network | Agents enter infinite loops trying to reach Rancher | Node CPU/memory gets exhausted
❌ Authentication errors (cert expired, token invalid) | Agents spam auth failures on kube-apiserver | API server becomes slow or hangs
❌ Agent version mismatch | Older agents send bad requests to newer servers | API server rejects a flood of bad requests
❌ Cluster registration issues (wrong cluster state in Rancher DB) | Rancher tries to sync invalid resources | etcd or kube-apiserver crash
❌ cattle-node-agent overload | Each node spawns bad agents; the DaemonSet restarts infinitely | kubelet and the container runtime get overloaded

🚨 Important:

  • Rancher’s cattle-cluster-agent talks both to your Kubernetes API server and to Rancher’s API.
  • If it is broken (wrong URL, invalid token, expired cert, bad Rancher setup), it spams the cluster.
  • Symptoms you usually see:
    • kubectl get nodes hangs
    • kubectl get pods -A freezes
    • CPU of kube-apiserver jumps high
    • Cluster nodes become NotReady
    • etcd warnings appear if etcd is running locally

✅ Proof: If you kill (scale to 0) the cattle-cluster-agent, the cluster usually becomes healthy again.
Example:

kubectl scale deployment cattle-cluster-agent -n cattle-system --replicas=0

→ Cluster becomes stable.


💬 Summary in simple words:

Rancher agents are deeply tied into your Kubernetes cluster.
If they are broken, they spam your Kubernetes system like crazy, and that overloads and crashes your cluster.


🔎 Next step: To solve it properly:

  1. Collect logs:
     kubectl logs -n cattle-system cattle-cluster-agent-xxxx
     kubectl logs -n cattle-system cattle-node-agent-xxxx
  2. Fix whatever is wrong (wrong URL, expired cert, wrong Rancher token, network broken).
  3. Then redeploy cattle-system carefully.
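A quick way to act on step 1 is to grep the agent logs for the known failure classes. A sketch run against a sample log line (the message strings are examples, not exact Rancher output):

```shell
#!/usr/bin/env bash
# Classify a cattle agent log line into one of the failure causes above.
classify() {
  case "$1" in
    *"certificate has expired"*)                 echo "renew Rancher TLS certificates" ;;
    *"x509"*)                                    echo "fix certificate trust" ;;
    *"connection refused"*|*"no route to host"*) echo "check Rancher URL / network" ;;
    *"401"*|*"Unauthorized"*)                    echo "re-register with a fresh cluster token" ;;
    *)                                           echo "unknown; read the full log" ;;
  esac
}

classify 'level=error msg="x509: certificate has expired or is not yet valid"'
# prints: renew Rancher TLS certificates
```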


This is a serious architecture conflict:

Rancher’s cattle-node-agent is destabilizing your AWX deployment.

And the reason is resource exhaustion or cluster network/messaging overload.


🔥 Why does this happen?

Cause | Description | Why it breaks AWX
Resource exhaustion | Rancher’s cattle-node-agent is heavy (especially when reconnecting or retrying); it uses a lot of CPU, memory, and bandwidth | AWX Postgres and web pods starve for CPU/memory or lose network
Node pressure | When two cattle-node-agents come alive, node CPU/memory pressure increases | Kubernetes starts evicting pods (including AWX pods)
Pod preemption | If no resource limits are properly set, kube-scheduler kills "low-priority" pods to save critical pods | Your AWX web or Postgres pod might get evicted
Flapping networking | Rancher’s agents may try to reach the Rancher server and overload K8s networking | AWX needs stable DNS and API server access; it breaks
PersistentVolume interruptions | If Rancher agents cause node instability, your Postgres PVCs (storage) might get stuck or remount improperly | AWX relies heavily on Postgres; database disruptions kill it

🧠 Let’s quickly get technical:

If you run:

kubectl get events -n awx --sort-by='.metadata.creationTimestamp'

or

kubectl describe pod <awx-pod-name> -n awx

you will likely see:

  • Evicted messages
  • NodeAffinity errors
  • OutOfcpu
  • OutOfmemory
  • Pod sandbox changed
  • Volume mount errors
  • Postgres connection refused or file corruption

🛠 How to properly fix this

✅ Step 1: Check Node Resources

Look at:

kubectl describe node <node-name>

Check for:

  • Allocatable CPU/memory
  • Current Usage
  • Conditions like MemoryPressure, DiskPressure
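Step 1 can be scripted: pull just the pressure conditions that are currently firing out of the node description. Demonstrated against a canned sample so it runs anywhere:

```shell
#!/usr/bin/env bash
# Extract node conditions that are currently "True" (i.e., active problems).
# In real use the input would come from:
#   kubectl describe node <node-name>
sample='MemoryPressure   False
DiskPressure     True
PIDPressure      False'

printf '%s\n' "$sample" | awk '$2 == "True" { print $1 }'
# prints: DiskPressure
```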

✅ Step 2: Set Strong Resource Requests & Limits for AWX

In your AWX YAML (awx.yaml), add hard limits like this:

spec:
  web_resource_requirements:
    requests:
      memory: 1Gi
      cpu: 500m
    limits:
      memory: 2Gi
      cpu: 1000m
  task_resource_requirements:
    requests:
      memory: 1Gi
      cpu: 500m
    limits:
      memory: 2Gi
      cpu: 1000m
  postgres_resource_requirements:
    requests:
      memory: 1Gi
      cpu: 500m
    limits:
      memory: 2Gi
      cpu: 1000m

✅ Step 3: Pin AWX pods to 1 specific node (optional)

If your nodes are unstable, you can force AWX to run only on the “good” node:

spec:
  node_selector: |
    node-role.kubernetes.io/awx: "true"

Or use taints and tolerations.

✅ Step 4: Fix Rancher agents

If your Rancher agents (cattle-node-agent) are going crazy (e.g., retrying, error loops):

  • Make sure nodes can reach Rancher server stably
  • Update the registration URL if needed
  • Fix certificate errors

You can check the cattle-node-agent logs:

kubectl logs -n cattle-system cattle-node-agent-xxxxx

βœ… Step 5: Stabilize PVCs

If AWX Postgres uses PVC (Persistent Volume Claim), ensure that:

  • Storage class is fast and resilient
  • Volumes aren’t remounting during node pressure

If you use Longhorn or local-path, make sure your storage is attached and healthy.


⚡ Quick Summary

Symptom | Root Cause | Fix
AWX web and Postgres pods crash when the Rancher agent wakes up | Node pressure, network overload, volume flapping | Set AWX hard resource requests/limits, stabilize the Rancher agent, pin critical pods to the best nodes, ensure fast storage

✍️ What I Recommend for You Right Now

  1. Look at AWX pods:
     kubectl get pods -n awx
     kubectl describe pod <awx-pod> -n awx
     kubectl get events -n awx --sort-by='.metadata.creationTimestamp'
  2. Look at node status:
     kubectl describe node <node-name>
  3. Look at Rancher cattle-node-agent logs:
     kubectl logs -n cattle-system cattle-node-agent-xxxx



A better AWX deployment YAML sets:

  • Node Affinity
  • Hard CPU/memory limits
  • Stable PVC
  • Tolerations (if needed)

Troubleshooting

kubectl get secret awx-admin-password -n awx -o jsonpath="{.data.password}" | base64 --decode

Install license

#!/bin/bash

# === CONFIGURATION ===
LICENSE_FILE="/path/to/your/maprlicense.txt"
PRIMARY_NODE="cldb-node.example.com"   # change to your CLDB node hostname or IP

# === STEP 1: Copy license to the primary node ===
echo "Copying license file to $PRIMARY_NODE..."
scp "$LICENSE_FILE" "$PRIMARY_NODE:/tmp/maprlicense.txt"

# === STEP 2: Add the license on the cluster ===
echo "Adding license using maprcli on $PRIMARY_NODE..."
ssh "$PRIMARY_NODE" "sudo maprcli license add -license /tmp/maprlicense.txt"

# === STEP 3: Verify license ===
echo "Verifying license status..."
ssh "$PRIMARY_NODE" "sudo maprcli license list -json | jq ."

echo "βœ… License added and verified successfully!"


How Cluster ID is normally generated

When you first install MapR, you run something like:

sudo /opt/mapr/server/configure.sh -C <cldb-nodes> -Z <zookeeper-nodes> -N <cluster-name>

When configure.sh runs for the first time:

  • It creates /opt/mapr/conf/clusterid
  • It creates /opt/mapr/conf/mapr-clusters.conf
  • It registers your node with CLDB

✅ The Cluster ID is a random large number, created automatically.

🔥 If you need to (re)generate a Cluster ID manually:

If you're setting up a new cluster and no CLDB is initialized yet, you can force-generate a Cluster ID like this:

Stop Warden if running:

sudo systemctl stop mapr-warden

Clean old config (careful: if the cluster already had data, don't do this):

sudo rm -rf /opt/mapr/conf/clusterid
sudo rm -rf /opt/mapr/conf/mapr-clusters.conf

Re-run configure.sh, for example:

sudo /opt/mapr/server/configure.sh -N mycluster.example.com -C cldb-node1,cldb-node2 -Z zk-node1,zk-node2

  • -N: Cluster Name
  • -C: CLDB nodes
  • -Z: ZooKeeper nodes

After that:

cat /opt/mapr/conf/clusterid

→ You will now see the new Cluster ID!
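Since the Cluster ID is just a large random integer, a quick sanity check after reconfiguration is to confirm the file content is non-empty and purely numeric. A sketch (shown with a sample value instead of reading /opt/mapr/conf/clusterid):

```shell
#!/usr/bin/env bash
# Sanity-check a clusterid value: non-empty and digits only.
check_clusterid() {
  case "$1" in
    ''|*[!0-9]*) echo "invalid" ;;
    *)           echo "valid" ;;
  esac
}

check_clusterid "7352693354882440185"   # valid (sample value, not a real ID)
check_clusterid ""                      # invalid
```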

Kong Konnect vs Kong Gateway

Aspect | Kong Gateway (self-hosted) | Kong Konnect (cloud SaaS)
Deployment | You install and manage Kong yourself (VMs, Kubernetes, bare metal) | Kong hosts the control plane; you run minimal Data Planes
Upgrades | You upgrade Kong manually | Kong upgrades the control plane automatically
Scaling | You manage scaling (HA, clustering) | Kong auto-scales the control plane; you scale only Data Planes
Security | You manage certificates, patching, hardening | Kong secures the control plane; you secure the data planes
Analytics | Optional, via your own Prometheus/Grafana, or Enterprise Edition analytics | Built-in with Konnect: real-time metrics, usage, dashboards
Admin GUI | Kong Manager on your VM | Cloud UI in Konnect (always updated)
Developer Portal | Host and manage it yourself | Cloud-hosted Developer Portal included
RBAC / Single Sign-On (SSO) | Enterprise feature; you configure LDAP, OIDC yourself | Native SSO, multi-organization support
Pricing | License + cost of infrastructure + admin work | Subscription pricing (includes hosting + support)
Reliability | Depends on your HA setup | Konnect has a SaaS SLA (uptime guaranteed)

Why choose Kong Konnect over self-managed Kong Gateway?

  • Faster to deploy: no need to install, configure, secure Control Plane.
  • Zero maintenance: no patching, backups, upgrades for control plane.
  • Global availability: Control Plane is multi-region by default.
  • Modern features: You get new Kong features faster (Konnect users get earlier access).
  • Built-in observability: native dashboards, logging, analytics ready.
  • Multi-tenant support: You can separate teams, apps, etc easily.
  • Reduced DevOps overhead: focus only on managing lightweight Data Planes.

JWTs and client certificates

JWTs and client certificates are both authentication methods, but they are not directly dependent on each other. They solve different security goals, and in some advanced setups they can complement each other.

Let’s break it down:


🔄 JWT vs. Client Certificate: Purpose

Feature | JWT | Client Certificate (mTLS)
Type | Token-based authentication | Certificate-based mutual TLS (mTLS)
Validated By | Application / API gateway (e.g., Kong) | TLS handshake (mutual authentication)
Authenticates | Who you are (user/app identity) | What you are (trusted machine or client)
Revocation | Hard to revoke unless you use a blacklist | Can be revoked via CRL or OCSP
Stateless | ✅ Yes, self-contained | ❌ No; cert revocation/status may require state
Setup Complexity | Moderate | Higher (requires PKI, CA, trust setup)

Kong Troubleshooting

“Invalid status code received from the token endpoint” means Kong tried to exchange an authorization code for a token, but the PingFederate token endpoint replied with an error.

302 Found:

  • Kong redirects the client to the authorization endpoint of PingFederate.
  • This is normal behavior during the initial OIDC flow (when no token is present).

401 Unauthorized (after redirect):

  • The client is redirected back to Kong with an authorization code.
  • Then Kong calls the token endpoint to exchange the code for tokens.
  • But this step fails (e.g., bad client credentials, redirect URI mismatch, wrong token endpoint).
  • Result: 401 Unauthorized, often shown to the user after the browser returns from the IdP.

A 400 Bad Request from the OpenID Connect token endpoint usually means something is wrong with the request payload you’re sending. This often happens during a token exchange or authorization code flow.

Let’s troubleshoot it step by step:

πŸ” Common Causes of 400 from Token Endpoint

  1. Invalid or missing parameters
    • Missing grant_type, client_id, client_secret, code, or redirect_uri
    • Using wrong grant_type (e.g., should be authorization_code, client_credentials, refresh_token, etc.)
  2. Mismatched or invalid redirect URI
    • Must match the URI registered with the provider exactly.
  3. Invalid authorization code
    • Expired or already used.
  4. Invalid client credentials
    • Bad client_id / client_secret
  5. Wrong Content-Type
    • The request should be sent with: Content-Type: application/x-www-form-urlencoded
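A quick pre-flight check against cause 1: verify the form body carries every required parameter before blaming the IdP. A sketch (parameter names are the standard OAuth2 authorization_code set):

```shell
#!/usr/bin/env bash
# Check an x-www-form-urlencoded token request body for the parameters
# an authorization_code grant needs.
check_token_body() {
  local body="$1" missing=""
  for p in grant_type client_id client_secret code redirect_uri; do
    case "$body" in
      *"${p}="*) ;;
      *) missing="$missing $p" ;;
    esac
  done
  if [ -z "$missing" ]; then echo "ok"; else echo "missing:$missing"; fi
}

check_token_body "grant_type=authorization_code&client_id=x&client_secret=y&code=z&redirect_uri=https%3A%2F%2Fkong%2Fcallback"
# prints: ok
check_token_body "grant_type=authorization_code&code=z"
# prints: missing: client_id client_secret redirect_uri
```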

To know why Ping returned 400, you need to:

  1. Check PingFederate logs – often shows detailed error like:

Invalid redirect_uri
Invalid client credentials
Unsupported grant_type

Kong is probably misconfigured or failing to capture the code from the redirect step before trying the token exchange.

This usually happens due to:

  • Misconfigured redirect_uri
  • Missing or misrouted callback handling (/callback)
  • Client app hitting the wrong route first
  • Kong OIDC plugin misconfigured (missing session_secret, or improper auth_methods)

Troubleshooting