Kong Troubleshooting

“Invalid status code received from the token endpoint” means Kong tried to exchange an authorization code for a token, but the PingFederate token endpoint replied with an error.

302 Found:

  • Kong redirects the client to the authorization endpoint of PingFederate.
  • This is normal behavior during the initial OIDC flow (when no token is present).

401 Unauthorized (after redirect):

  • The client is redirected back to Kong with an authorization code.
  • Then Kong calls the token endpoint to exchange the code for tokens.
  • But this step fails (e.g., bad client credentials, redirect URI mismatch, wrong token endpoint).
  • Result: 401 Unauthorized, often shown to the user after the browser returns from the IdP.

A 400 Bad Request from the OpenID Connect token endpoint usually means something is wrong with the request payload you’re sending. This often happens during a token exchange or authorization code flow.

Let’s troubleshoot it step by step:

πŸ” Common Causes of 400 from Token Endpoint

  1. Invalid or missing parameters
    • Missing grant_type, client_id, client_secret, code, or redirect_uri
    • Using wrong grant_type (e.g., should be authorization_code, client_credentials, refresh_token, etc.)
  2. Mismatched or invalid redirect URI
    • Must match the URI registered with the provider exactly.
  3. Invalid authorization code
    • Expired or already used.
  4. Invalid client credentials
    • Bad client_id / client_secret
  5. Wrong Content-Type
    • The request must be sent with: Content-Type: application/x-www-form-urlencoded
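For comparison, a well-formed code-exchange body contains all five parameters. A small sketch, where every value is a hypothetical placeholder:

```shell
#!/bin/bash
# Build the form-encoded body for an authorization_code token exchange.
# All values below are illustrative placeholders, not real credentials.
build_token_request() {
  local code="$1" redirect_uri="$2" client_id="$3" client_secret="$4"
  printf 'grant_type=authorization_code&code=%s&redirect_uri=%s&client_id=%s&client_secret=%s' \
    "$code" "$redirect_uri" "$client_id" "$client_secret"
}

BODY=$(build_token_request "abc123" "https://kong.example.com/callback" "my-client" "s3cret")
echo "$BODY"

# It would then be sent with the required Content-Type, e.g. (hypothetical endpoint):
# curl -i -X POST "https://pingfed.example.com/as/token.oauth2" \
#   -H "Content-Type: application/x-www-form-urlencoded" \
#   --data "$BODY"
```

In a real request the redirect_uri value should be percent-encoded; curl's --data-urlencode option does this for you.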

To know why Ping returned 400, you need to:

  1. Check PingFederate logs – they often show a detailed error such as:
    • Invalid redirect_uri
    • Invalid client credentials
    • Unsupported grant_type

Kong is probably misconfigured or failing to capture the code from the redirect step before trying the token exchange.

This usually happens due to:

  • Misconfigured redirect_uri
  • Missing or misrouted callback handling (/callback)
  • Client app hitting the wrong route first
  • Kong OIDC plugin misconfigured (missing session_secret, or improper auth_methods)
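When the plugin itself is the suspect, it helps to compare against a minimal openid-connect setup. The following is a sketch only; the route name, issuer, credentials, and secret are hypothetical placeholders, and exact config fields vary by Kong Enterprise version:

```shell
# Hypothetical values throughout – adjust to your route and IdP.
curl -i -X POST http://localhost:8001/routes/my-route/plugins \
  --data "name=openid-connect" \
  --data "config.issuer=https://pingfed.example.com" \
  --data "config.client_id=my-client" \
  --data "config.client_secret=s3cret" \
  --data "config.redirect_uri=https://kong.example.com/callback" \
  --data "config.auth_methods=authorization_code" \
  --data "config.session_secret=some-long-random-string"
```

The redirect_uri here must exactly match what is registered in PingFederate, and session_secret should be identical on every Kong node behind the load balancer so sessions survive failover.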

Migrating from Kong Gateway to Kong Konnect

Migrating from Kong Gateway (self-managed/on-prem) to Kong Konnect (cloud-managed) involves a combination of:

  • Exporting your current Kong configuration
  • Translating any on-prem customizations or plugins
  • Importing services and routes into Konnect
  • Updating auth, plugins, and Dev Portal configuration
  • Re-pointing your traffic and observability tools

Here’s a step-by-step migration plan with optional tooling for automation:


Step 1: Inventory Your Current Kong Gateway

Start by identifying all current components:

  • Services
  • Routes
  • Plugins
  • Consumers & credentials
  • RBAC users & roles
  • Custom plugins (if any)
  • Certificates
  • Upstreams / Targets
  • Rate limiting or security policies

You can use:

deck dump --kong-addr http://<admin-api>:8001 --output-file kong-export.yaml

This uses decK, a declarative config tool for Kong.
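Before moving on, it can be worth sanity-checking the export. Below is a rough sketch that counts top-level entries in the decK file; note it will not count routes nested under services:

```shell
#!/bin/bash
# Sanity-check a decK export: count items under top-level sections.
count_entries() {
  local file="$1" section="$2"
  awk -v sec="$section" '
    $0 == sec ":"   { in_sec = 1; next }   # section header, e.g. "services:"
    /^[A-Za-z_]+:/  { in_sec = 0 }         # any other top-level key ends it
    in_sec && /^- / { n++ }                # one list item per "- " at column 0
    END { print n + 0 }
  ' "$file"
}

if [ -f kong-export.yaml ]; then
  echo "services: $(count_entries kong-export.yaml services)"
  echo "routes:   $(count_entries kong-export.yaml routes)"
  echo "plugins:  $(count_entries kong-export.yaml plugins)"
fi
```

Comparing these counts before and after the Konnect import is a quick way to spot anything decK silently skipped.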


Step 2: Set Up Kong Konnect

  1. Sign up for Kong Konnect
  2. Create a Runtime Group (this is where your data plane will connect)
  3. Install Kong Gateway (in Konnect mode) as the Data Plane:

curl -O https://download.konghq.com/gateway-3.x-centos/Packages/k/kong-3.x.rpm

Then configure it with:

role: data_plane
cluster_control_plane: <Konnect CP endpoint>
cluster_telemetry_endpoint: <Telemetry CP endpoint>

Step 3: Translate & Import Configuration

Use decK to sync into Konnect:

deck sync --konnect-runtime-group-name <runtime-group-name> \
          --konnect-token <your-token> \
          --state kong-export.yaml

decK v1.16+ supports importing directly into Konnect via its --konnect-* flags.

Note: decK does not migrate:

  • RBAC user roles
  • Developer Portal assets (you’ll need to re-upload manually)
  • Custom plugins (must be re-implemented and built for Konnect if supported)

Step 4: Migrate Authentication & Plugins

  • Consumers / Auth: Recreate consumers in Konnect or use Konnect Dev Portal to register apps
  • Certificates: Re-upload any TLS certs to Konnect
  • Custom Plugins: Migrate only if they are supported on Kong Konnect. Otherwise, consider rewriting logic using Lua/Python and submit to Kong support if needed.

Step 5: Reconfigure Observability

Kong Konnect offers built-in integrations:

  • Logs: Datadog, HTTP log, Splunk (via plugin)
  • Metrics: Prometheus, Kong Vitals
  • Use the Konnect GUI or API to configure logging plugins

Step 6: Redirect Traffic to Konnect Runtime

  • Update DNS or Load Balancer to send traffic to new Konnect Data Plane IPs
  • Perform traffic shadowing/canary if needed

Final Step: Validation & Cutover

  • Smoke test all endpoints
  • Test rate limits, auth flows, consumer access
  • Validate logs and metrics collection
  • Disable/decommission legacy Kong Gateway only after validation
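The smoke-test and auth checks above can be scripted. A rough sketch, where the hostname, paths, and expected status codes are assumptions for illustration:

```shell
#!/bin/bash
# Smoke-test a list of endpoints and flag unexpected HTTP status codes.
# Hostname and endpoint list below are hypothetical.
check_status() {
  # Return 0 if the actual status matches the expected one.
  local expected="$1" actual="$2"
  [ "$actual" = "$expected" ]
}

ENDPOINTS=(
  "/api/v1/health 200"
  "/api/v1/orders 401"   # expect auth to still be enforced
)

for entry in "${ENDPOINTS[@]}"; do
  path=${entry% *}
  expected=${entry#* }
  actual=$(curl -s -o /dev/null -w '%{http_code}' "https://konnect-dp.example.com$path")
  if check_status "$expected" "$actual"; then
    echo "OK   $path ($actual)"
  else
    echo "FAIL $path (expected $expected, got $actual)"
  fi
done
```

Running the same script against the legacy gateway and the Konnect data plane makes it easy to diff behavior before cutover.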

Databricks

Databricks is a cloud-based data platform built for data engineering, data science, machine learning, and analytics. It provides a unified environment that integrates popular open-source tools like Apache Spark, Delta Lake, and MLflow, and is designed to simplify working with big data and AI workloads at scale.


What Databricks Does

Databricks allows you to:

  • Ingest, clean, and transform large volumes of data
  • Run machine learning models and notebooks collaboratively
  • Perform interactive and batch analytics using SQL, Python, R, Scala, and more
  • Securely govern and share data across teams and workspaces

Core Components

  • Databricks Workspace – Your development environment for notebooks, jobs, and clusters
  • Clusters – Scalable compute resources (based on Apache Spark)
  • Delta Lake – Open-source storage layer that adds ACID transactions and versioning to data lakes
  • Unity Catalog – Centralized data governance and access control layer
  • MLflow – Manages the lifecycle of machine learning experiments, models, and deployments
  • Jobs – Scheduled or triggered ETL pipelines and batch workloads
  • SQL Warehouses – Serverless SQL compute for BI and analytics workloads

Runs on Major Clouds

  • AWS
  • Microsoft Azure
  • Google Cloud

Use Cases

  • Data lakehouse architecture
  • ETL/ELT processing
  • Business intelligence and analytics
  • Real-time streaming data processing
  • Machine learning and MLOps
  • GenAI development using large language models

Quick Analogy:

Think of Databricks as a “data factory + AI lab + SQL analytics tool” all in one, built on top of scalable cloud compute and storage.

Shell script

#!/bin/bash

# List of your servers (can be IPs or hostnames)
SERVERS=(
  server1.example.com
  server2.example.com
  server3.example.com
  server4.example.com
  server5.example.com
  server6.example.com
)

FILE_PATH="/opt/pfengine/file.txt"

for server in "${SERVERS[@]}"; do
  echo "🔍 Checking $FILE_PATH on $server"

  if ! ssh -o ConnectTimeout=5 "$server" "ls -l $FILE_PATH" 2>/dev/null; then
    echo "❌ Could not access file on $server"
  fi

  echo "--------------------------------------"
done

Allow LDAP users to access the Kong Manager GUI in Kong Gateway

To allow LDAP users to access the Kong Manager GUI in Kong Gateway Enterprise 3.4, you’ll need to integrate LDAP authentication via the Kong Enterprise Role-Based Access Control (RBAC) system.

Here’s how you can get it working step by step 👇


👤 Step 1: Configure LDAP Authentication for Kong Manager

Edit your kong.conf or pass these as environment variables if you’re using a container setup.

admin_gui_auth = ldap-auth-advanced
admin_gui_auth_conf = {
  "ldap_host": "ldap.example.com",
  "ldap_port": 389,
  "ldap_base_dn": "dc=example,dc=com",
  "ldap_attribute": "uid",
  "ldap_bind_dn": "cn=admin,dc=example,dc=com",
  "ldap_password": "adminpassword",
  "start_tls": false,
  "verify_ldap_host": false
}

✅ If you’re using LDAPS, set ldap_port = 636 and keep start_tls = false (StartTLS applies to plain LDAP on port 389, not to LDAPS).

Restart Kong after updating this config.


👥 Step 2: Create an RBAC User Linked to the LDAP Username

Kong still needs an RBAC user that maps to the LDAP-authenticated identity.

curl -i -X POST http://localhost:8001/rbac/users \
  --data "name=jdoe" \
  --data "user_token=jdoe-admin-token"

The name here must match the LDAP uid or whatever attribute you configured with ldap_attribute.


πŸ” Step 3: Assign a Role to the RBAC User

curl -i -X POST http://localhost:8001/rbac/users/jdoe/roles \
  --data "roles=read-only"  # Or "admin", "super-admin", etc.

Available roles: read-only, admin, super-admin, or your own custom roles.


🔓 Step 4: Log into Kong Manager with LDAP User

Go to your Kong Manager GUI:

https://<KONG_MANAGER_URL>:8445

Enter:

  • Username: jdoe (LDAP uid)
  • Password: LDAP user’s actual password (Kong will bind to LDAP and verify it)

πŸ› οΈ Optional: Test LDAP Config from CLI

You can test the LDAP binding from Kong CLI:

curl -i -X POST http://localhost:8001/rbac/users \
  --data "name=testuser" \
  --data "user_token=test123"

Then try logging into Kong Manager with testuser using their LDAP password.


Kong logs (2 Zones, 4 Servers → Splunk)

In Your Setup:

Each zone has its own shared DB:

  • Zone A (A1 & A2) β†’ DB-A
  • Zone B (B1 & B2) β†’ DB-B

That implies:

  • The plugin configuration lives in the shared DB, so enabling it once per zone is enough.
  • Run the Admin API call on one node in each zone; the other node in that zone picks the config up from the shared DB.

✅ What You Should Do:

  1. Run this plugin setup command on one Kong node per zone (e.g., A1 and B1):

curl -i -X POST http://localhost:8001/plugins/ \
  --data "name=http-log" \
  --data "config.http_endpoint=https://splunk-hec.example.com:8088/services/collector" \
  --data "config.method=POST" \
  --data "config.headers[Authorization]=Splunk YOUR-HEC-TOKEN" \
  --data "config.queue.size=1000"

  2. Confirm it’s active via:

curl http://localhost:8001/plugins


πŸ›‘οΈ Bonus Tip: Tag Your Logs by Node/Zone

To make Splunk logs more useful, you can:

  • Add custom headers or query parameters with zone info.
  • Use a transform or custom_fields in Splunk to tag logs from Zone A vs B.

Example:

--data "config.headers[X-Kong-Zone]=zone-a"

convert .crt to .p12

To convert a .crt file to a .p12 (PKCS#12) file, you’ll need:

  • Your certificate file (your_cert.crt)
  • The private key file (your_key.key)
  • (Optional) The CA bundle file (ca_bundle.crt)

You can use the openssl command like this:

openssl pkcs12 -export \
  -in your_cert.crt \
  -inkey your_key.key \
  -out your_cert.p12 \
  -name "your_alias" \
  -certfile ca_bundle.crt

Explanation:

  • -in your_cert.crt: Your certificate
  • -inkey your_key.key: Your private key
  • -out your_cert.p12: Output .p12 file
  • -name "your_alias": Alias for the key in the keystore
  • -certfile ca_bundle.crt: (Optional) Chain/intermediate certificates
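To sanity-check the conversion end to end, you can generate a throwaway key and certificate first; the filenames and pass phrase below are illustrative only:

```shell
#!/bin/bash
set -e
# Create a throwaway self-signed cert + key (for testing the conversion only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=throwaway-test" \
  -keyout test.key -out test.crt

# Bundle them into a PKCS#12 keystore.
openssl pkcs12 -export \
  -in test.crt -inkey test.key \
  -out test.p12 -name "test_alias" \
  -passout pass:changeit

# Verify the keystore can be read back.
openssl pkcs12 -info -in test.p12 -passin pass:changeit -noout && echo "p12 OK"
```

The same -info check works on your real .p12 and is a quick way to confirm the export password before handing the file over.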

MapR disk – mapr_disk_cleanup_and_setup.yml

---
- name: Clean and prepare MapR disks
  hosts: mapr_nodes
  become: yes
  tasks:

    - name: Stop MapR Warden
      service:
        name: mapr-warden
        state: stopped

    - name: Unmount and wipe disks (/dev/sdb /dev/sdc /dev/sdd)
      shell: |
        for disk in /dev/sdb /dev/sdc /dev/sdd; do
          umount ${disk}* 2>/dev/null || true
          wipefs -a $disk
          sgdisk --zap-all $disk
        done
      ignore_errors: yes

    - name: Create disk list for MapR
      copy:
        dest: /tmp/disk.list
        content: |
          /dev/sdb
          /dev/sdc
          /dev/sdd

    - name: Run disksetup with -F (force)
      shell: /opt/mapr/server/disksetup -F /tmp/disk.list

    - name: Start MapR Warden
      service:
        name: mapr-warden
        state: started

    - name: Wait for Warden to stabilize
      wait_for:
        path: /opt/mapr/logs/warden.log
        search_regex: "Starting all services"
        timeout: 60

    - name: Verify MapR disk list
      shell: maprcli disk list
      register: disk_status

    - name: Show disk list output
      debug:
        var: disk_status.stdout
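The playbook can then be run against the mapr_nodes group; the inventory filename here is an assumption:

```shell
# Syntax/connectivity check first; note that shell tasks are skipped in
# --check mode, so this is not a full dry run.
ansible-playbook -i inventory.ini mapr_disk_cleanup_and_setup.yml --check

# Real run – this wipes /dev/sdb, /dev/sdc and /dev/sdd on every target host.
ansible-playbook -i inventory.ini mapr_disk_cleanup_and_setup.yml
```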

Kong – failover test

On-Premises Failover Scenario – 4 Servers Across 2 Zones

Physical Layout

  • Zone A (Rack A): Server A1, Server A2
  • Zone B (Rack B): Server B1, Server B2
  • Each zone is in separate racks, power circuits, possibly even separate rooms/buildings (if possible).
  • Redundant network paths and power per zone.

βš™οΈ Typical Architecture

  • HA Load Balancer: LVS / HAProxy / Keepalived / F5 (active-passive or active-active)
  • Heartbeat/Health Monitoring: Keepalived, Corosync, or Pacemaker
  • Shared State (optional): GlusterFS, DRBD, etcd, or replicated DB

πŸ” Failover Scenarios

1. Single Server Failure (e.g., A1 goes down)

  • Load balancer marks A1 as unhealthy.
  • A2, B1, B2 continue serving traffic.
  • No impact to availability.

2. Zone Failure (e.g., entire Rack A fails, taking down A1 and A2)

  • Power/network failure in Zone A.
  • Load balancer detects both A1 and A2 as down.
  • All traffic is redirected to B1 and B2.
  • Ensure B1/B2 can handle the full load.

3. Intermittent Network Failure in One Zone

  • Heartbeat may detect nodes as “split”.
  • Use quorum-based or fencing mechanisms (STONITH) to avoid split-brain.
  • Pacemaker/Corosync can help in cluster management and decision-making.

4. Load Balancer Node Fails

  • Use HA load balancer pair with VRRP (Keepalived) or hardware failover.
  • Virtual IP (VIP) is moved to the standby node.
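A minimal Keepalived pair for the VIP failover in scenario 4 can be sketched as follows; the interface name, router ID, VIP, and health-check script path are placeholders:

```
vrrp_script chk_lb {
    script "/usr/local/bin/check_lb.sh"   # hypothetical health-check script
    interval 2
    fall 2
}

vrrp_instance VI_1 {
    state MASTER              # set to BACKUP on the standby node
    interface eth0
    virtual_router_id 51
    priority 100              # use a lower value (e.g., 90) on the standby
    advert_int 1
    virtual_ipaddress {
        192.168.10.100/24
    }
    track_script {
        chk_lb
    }
}
```

When the health check fails or the MASTER node dies, the standby wins the VRRP election and takes over the VIP.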

5. Storage or DB Node Fails

  • If using shared storage or clustered DBs:
    • Ensure data replication (synchronous if possible).
    • Use quorum-aware systems (odd number of nodes ideal, maybe an external arbitrator).
    • DRBD or GlusterFS with quorum can help avoid data corruption.

Kong – generate client cert

When generating a client certificate for Kong, you generally need to provide the .crt and .key files to the client. However, the .pem file can also be used, depending on the application’s needs.

Here’s how each file is used:

  1. .crt (Certificate File) – This contains the public certificate of the client.
  2. .key (Private Key File) – This holds the private key for the client.
  3. .pem (Privacy-Enhanced Mail Format) – This can contain both the certificate and private key (and sometimes even intermediate certificates) in a single file.

What Should You Provide to the Client?

  • If the client explicitly needs separate certificate and key files, provide:
    • client.crt
    • client.key
  • If the client can handle a single PEM file, provide:
    • client.pem (which includes both the certificate and private key)

To generate a PEM file from .crt and .key:

cat client.crt client.key > client.pem

🔹 Use Case:

  • Some applications and libraries (e.g., cURL, OpenSSL, and certain API clients) accept a single PEM file instead of separate .crt and .key files.
  • If the client is using mutual TLS (mTLS) authentication with Kong, check if they need a .pem file instead.
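In practice, the two options map directly onto curl's TLS flags (the URL and filenames are placeholders):

```shell
# Separate certificate and key files:
curl --cert client.crt --key client.key https://kong.example.com/secure-route

# Single combined PEM (cert + key in one file):
curl --cert client.pem https://kong.example.com/secure-route
```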