A Route maps incoming requests (based on path, method, host, headers, etc.) to a specific Service.
One service can have multiple routes (e.g., /v1/users, /v1/orders, etc.).
The Flow
Client --> Kong Gateway --> [Route matched] --> [Service invoked] --> Upstream API
Kong receives a request.
It checks the Route rules (e.g., path, host).
If a match is found, it forwards the request to the associated Service.
The upstream service processes it and returns the response via Kong.
Plugins
You can attach plugins (like rate-limiting, auth, logging) at:
Global level (applies to all requests)
Service level (applies to all routes of a service)
Route level (applies only to that route)
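As a sketch, the three scopes can be configured through the Admin API like this (the service name example-service and route name example-route are assumptions, as is the rate-limiting plugin used for illustration):

```shell
# Global: applies to every request through the gateway
curl -i -X POST http://localhost:8001/plugins \
  --data name=rate-limiting \
  --data config.minute=100

# Service level: applies to all routes of example-service
curl -i -X POST http://localhost:8001/services/example-service/plugins \
  --data name=rate-limiting \
  --data config.minute=100

# Route level: applies only to example-route
curl -i -X POST http://localhost:8001/routes/example-route/plugins \
  --data name=rate-limiting \
  --data config.minute=100
```

When the same plugin is configured at several scopes, the most specific one (route, then service, then global) wins for a given request.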
Quick Example via curl
# Create a service
curl -i -X POST http://localhost:8001/services \
--data name=example-service \
--data url=http://httpbin.org
# Create a route to that service
curl -i -X POST http://localhost:8001/services/example-service/routes \
--data paths[]=/test
Now, accessing http://localhost:8000/test will proxy to http://httpbin.org.
Example 2:
Let’s break down how Services and Routes work in Kong Gateway, with a clear explanation, flow, and examples.
What Is a Service in Kong?
A Service in Kong Gateway represents an upstream API or microservice. This is the destination Kong forwards requests to. You define a service with details like the host, port, and protocol (HTTP, HTTPS, TCP, etc.).
A client sends a request to Kong (e.g., GET https://api.myapp.com/users).
Kong matches the request to a Route based on the defined criteria.
The Route is linked to a Service.
Kong forwards the request to the upstream Service.
The response is returned to the client.
Flow Diagram (Simplified)
Client
  |
  v
[Request: GET /users]
  |
  v
[Kong Gateway]
  |
  +--> Match Route: "/users" -> user-service
  |
  +--> Forward to: http://user-api.internal:8080
  |
  v
[Upstream Service Response]
  |
  v
[Kong Gateway]
  |
  v
Client
Here’s a minimal yet flexible Ansible playbook to deploy routes in Kong Gateway using the uri module to interact with the Admin API. This example assumes:
Kong Admin API is accessible (e.g. http://localhost:8001).
A service is already defined (or you can create one using Ansible).
This can be extended to support idempotency (checking whether a route already exists before creating or updating it), Kong Konnect, or authentication headers (e.g. RBAC tokens).
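A minimal sketch of such a playbook, assuming the Admin API at http://localhost:8001 and an existing service named example-service (both placeholders):

```yaml
---
- name: Deploy a route in Kong Gateway
  hosts: localhost
  gather_facts: false
  vars:
    kong_admin_url: "http://localhost:8001"
    service_name: "example-service"
    route_paths:
      - /test
  tasks:
    - name: Create a route on the service
      ansible.builtin.uri:
        url: "{{ kong_admin_url }}/services/{{ service_name }}/routes"
        method: POST
        body_format: json
        body:
          paths: "{{ route_paths }}"
        status_code: 201
      register: route_result

    - name: Show the created route
      ansible.builtin.debug:
        var: route_result.json
```

As written this is not idempotent: a second run creates a second route. An idempotent version would first GET the service's routes and only POST (or PATCH) when needed.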
1. Missing or Invalid API Key (Key Authentication Plugin)
If you’re using the Key Authentication plugin, Kong expects a valid API key in the request.(Kong Docs)
Symptom: Receiving a 401 error with the message "No API key found in request" or "Invalid authentication credentials".(Turbolab Technologies)
Solution:
Create a Consumer: curl -i -X POST http://localhost:8001/consumers/ --data "username=your_username"
Assign an API Key to the Consumer: curl -i -X POST http://localhost:8001/consumers/your_username/key-auth --data "key=your_api_key"
Include the API Key in Your Request: curl -i http://localhost:8000/your_service -H "apikey: your_api_key"
Note: The header name (apikey) should match the key_names configured in your Key Authentication plugin.(Kong Docs)
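For completeness, the key-auth plugin itself must be enabled on the service or route before Kong will enforce keys; a sketch (the service name example-service is an assumption):

```shell
# Enable key authentication on a service;
# config.key_names controls which header/query parameter carries the key
curl -i -X POST http://localhost:8001/services/example-service/plugins \
  --data name=key-auth \
  --data config.key_names=apikey
```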
2. Expired or Invalid JWT Token (JWT Plugin)
If you’re using the JWT plugin, ensure that the token is valid and not expired.
Symptom: Receiving a 401 error with the message "Invalid token" or "Token expired".
Solution:
Verify the Token: Use tools like jwt.io to decode and inspect the token.
Check Expiration: Ensure the exp claim has not passed.
Validate Signature: Confirm that the token’s signature matches the expected signing key.(Auth0 Community)
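If jwt.io is not at hand, the payload can be decoded locally to inspect the exp claim; a small sketch (it does not verify the signature, and the token value is a placeholder):

```shell
#!/bin/sh
# Decode the payload (second segment) of a JWT without verifying the signature.
jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d '.' -f 2 | tr '_-' '/+')   # base64url -> base64
  case $(( ${#seg} % 4 )) in                                  # restore '=' padding
    2) seg="${seg}==" ;;
    3) seg="${seg}=" ;;
  esac
  printf '%s' "$seg" | base64 -d
}

# Usage: compare the exp claim against the current Unix time
# jwt_payload "$TOKEN" | jq .exp
# date +%s
```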
3. OpenID Connect (OIDC) Plugin Issues
When using the OIDC plugin, a 401 error can occur if the plugin’s cache is outdated after changes to the Identity Provider (IdP).(support.konghq.com)
Symptom: Receiving a 401 error after updating the IdP’s signing certificate.
Solution:
Clear Plugin Cache: Restart Kong or clear the plugin’s cache to fetch the updated signing keys from the IdP.
Verify IdP Configuration: Ensure that the IdP’s metadata URL is correctly configured in the plugin settings.
4. Plugin Configuration on Specific Routes or Services
Ensure that authentication plugins are correctly configured on the intended routes or services.
Symptom: Receiving a 401 error on specific endpoints.
Solution:
Check Plugin Association: Use the Admin API to list plugins associated with your routes or services. curl http://localhost:8001/plugins
Adjust Configuration: Ensure that the plugin is enabled and correctly configured for the desired routes or services.
Additional Troubleshooting Steps
Inspect Kong Logs: Check the Kong error logs for detailed information.
tail -f /usr/local/kong/logs/error.log
Use Debug Headers: Add the X-Kong-Debug: 1 header to your requests to get more insight into how Kong processes them.
Verify Plugin Order: Ensure that authentication plugins are executed in the correct order, especially when multiple plugins are used.
If you can provide more details about your setup, such as the authentication plugins in use, the specific routes or services affected, and sample requests, I can offer more targeted assistance.
Tagging helps isolate what to export (tag services, routes, and plugins with my-service-tag beforehand). You can also export the full config with deck dump.
Option B: Use cURL (for manual/scripted extraction)
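A sketch of the manual extraction, assuming the tag my-service-tag, jq installed, and the Admin API on localhost:8001 (list endpoints accept a ?tags= filter); the decK command is shown first for comparison:

```shell
# Full declarative export of the tagged objects with decK
deck dump --select-tag my-service-tag -o kong.yaml

# Manual/scripted extraction of the same objects with cURL
curl -s "http://localhost:8001/services?tags=my-service-tag" | jq '.data'
curl -s "http://localhost:8001/routes?tags=my-service-tag"   | jq '.data'
curl -s "http://localhost:8001/plugins?tags=my-service-tag"  | jq '.data'
```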
If you start or deploy the cattle-system pods (especially cattle-cluster-agent) and your cluster crashes, it means the Rancher agents are broken or misconfigured and are overloading, blocking, or breaking Kubernetes internally.
In detail, here’s why this can happen:

Cause: Rancher agents fail to connect and keep retrying.
What happens: They flood the Kubernetes API server with reconnect attempts.
Why it crashes the cluster: The API server gets overloaded and becomes unresponsive.

Cause: Wrong Rancher URL or broken network.
What happens: Agents enter infinite retry loops trying to reach Rancher.
Why it crashes the cluster: The constant retries overload the API server in the same way.

Rancher agents are deeply tied into your Kubernetes cluster. If they are broken, they hammer the Kubernetes API so heavily that the load overloads and crashes the cluster.
To diagnose, check the following:
Did you recently upgrade Rancher or move the Rancher server (IP change, domain change)?
Did you renew SSL/TLS certificates on Rancher?
Or is this a new cluster registration?
The exact fix depends on the answer and on the Rancher version (2.6.x, 2.7.x, 2.8.x, etc.).
This is a serious architecture conflict:
Rancher’s cattle-node-agent is destabilizing your AWX deployment, and the reason is resource exhaustion or cluster network/messaging overload.

Why does this happen?

Cause: Resource exhaustion.
Description: Rancher’s cattle-node-agent is heavy, especially when reconnecting or retrying; it uses a lot of CPU, memory, and bandwidth.
Why it breaks AWX: The AWX Postgres and Web pods starve for CPU/memory or lose network connectivity.

Cause: Node pressure.
Description: When two cattle-node-agents come alive, node CPU/memory pressure increases.
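To confirm resource exhaustion before blaming the agents, a few standard kubectl checks can be sketched (namespace names assume a default Rancher/AWX install; kubectl top requires metrics-server):

```shell
# Are the Rancher agents crash-looping or flooding the API server?
kubectl -n cattle-system get pods -o wide
kubectl -n cattle-system logs deploy/cattle-cluster-agent --tail=50

# Node and pod resource pressure (requires metrics-server)
kubectl top nodes
kubectl top pods -n cattle-system
kubectl top pods -n awx

# Temporarily scale the cluster agent down to verify it is the cause
kubectl -n cattle-system scale deploy cattle-cluster-agent --replicas=0
```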
#!/bin/bash
# === CONFIGURATION ===
LICENSE_FILE="/path/to/your/maprlicense.txt"
PRIMARY_NODE="cldb-node.example.com" # change to your CLDB node hostname or IP
# === STEP 1: Copy license to the primary node ===
echo "Copying license file to $PRIMARY_NODE..."
scp "$LICENSE_FILE" "$PRIMARY_NODE:/tmp/maprlicense.txt"
# === STEP 2: Add the license on the cluster ===
echo "Adding license using maprcli on $PRIMARY_NODE..."
ssh "$PRIMARY_NODE" "sudo maprcli license add -license /tmp/maprlicense.txt"
# === STEP 3: Verify license ===
echo "Verifying license status..."
ssh "$PRIMARY_NODE" "sudo maprcli license list -json | jq ."
echo "License added and verified successfully!"
How Cluster ID is normally generated
When you first install MapR, you run something like:
sudo /opt/mapr/server/configure.sh -C <cldb-nodes> -Z <zookeeper-nodes> -N <cluster-name>
When configure.sh runs for the first time:
It creates /opt/mapr/conf/clusterid
It creates /opt/mapr/conf/mapr-clusters.conf
It registers your node with CLDB
The Cluster ID is a random large number, created automatically.
If you need to (re)generate a Cluster ID manually:
If you're setting up a new cluster, and no CLDB is initialized yet, you can force generate a Cluster ID like this:
Stop Warden if running:
sudo systemctl stop mapr-warden
Clean old config (careful, if cluster already had data, don't do this):
sudo rm -rf /opt/mapr/conf/clusterid
sudo rm -rf /opt/mapr/conf/mapr-clusters.conf
Re-run configure.sh:
Example:
sudo /opt/mapr/server/configure.sh -N mycluster.example.com -C cldb-node1,cldb-node2 -Z zk-node1,zk-node2
-N: Cluster Name
-C: CLDB nodes
-Z: ZooKeeper nodes
After that:
cat /opt/mapr/conf/clusterid
You will now see the new Cluster ID!
JWTs and client certificates are both authentication methods, but they are not directly dependent on each other. They solve different security goals, and in some advanced setups they can complement each other.
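As an illustration of how they complement each other, a single request can present both a client certificate (mTLS, proving the calling machine at the TLS layer) and a JWT (proving the user or session at the application layer); the host and file names here are placeholders:

```shell
# mTLS authenticates the client at the TLS layer;
# the bearer JWT authenticates the caller at the application layer.
curl https://api.example.com/users \
  --cert client.crt --key client.key \
  --cacert ca.crt \
  -H "Authorization: Bearer $JWT"
```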