Kong – proxy issue

Got it—your client runs in AWS and must use a corporate proxy. With mTLS, a client-side proxy can absolutely be the culprit. Here’s the playbook.

What’s happening

  • If the proxy does TLS inspection (MITM), it terminates TLS and re-signs with its own CA. Your server asks the proxy (not the real client) for a cert → it has none → server logs “trying to obtain a certificate from the client.”
  • The client may also see “unable to get local issuer certificate” because it’s validating the proxy’s substituted cert but doesn’t trust the proxy’s Root CA.

Decide the path (pick one)

A) Allow end-to-end mTLS (best):
Ask the proxy admins to bypass SSL inspection for your domain (add it to the proxy’s TLS bypass list) or ensure it does pure CONNECT tunneling. Then the real client cert reaches your server.

B) Two-hop mTLS (enterprise pattern):

  • Client ↔ Proxy: mTLS using Client-Cert-#1 (issued by proxy’s CA).
  • Proxy ↔ Your Server: mTLS using Client-Cert-#2 (issued by a CA your server trusts).
    Your server will authenticate the proxy’s identity, not the original client. If you need end-user identity, have the proxy forward vetted identity (headers/JWT) and verify it.

C) Temporary test bypass:
On that AWS instance, set NO_PROXY for your hostname if policy allows, and open egress 443 to your server to confirm the issue is the proxy.

Quick diagnostics (run both)

1) See if proxy is intercepting

# Behind the proxy
openssl s_client -proxy PROXY_HOST:PROXY_PORT \
  -connect YOUR_HOST:443 -servername YOUR_HOST -showcerts </dev/null | \
  openssl x509 -noout -issuer -subject -enddate

Compare issuer/subject to the same command from a machine without the proxy.
If issuers differ (e.g., “Zscaler/BlueCoat/Corp Root CA”), it’s inspecting.

2) Prove mTLS works through a tunnel

# If proxy supports pure CONNECT (no inspection)
curl -v https://YOUR_HOST/ \
  --proxy http://PROXY_HOST:PROXY_PORT \
  --cert client.crt --key client.key \
  --cacert server_ca_chain.pem

  • If your proxy itself is HTTPS, add: --proxy-cacert PROXY_ROOT_CA.pem.

Java client notes (common in AWS)

# Client cert for mTLS to your server
-Djavax.net.ssl.keyStore=/path/client.p12 -Djavax.net.ssl.keyStorePassword=***** -Djavax.net.ssl.keyStoreType=PKCS12
# Trust your server (or proxy if inspecting)
-Djavax.net.ssl.trustStore=/path/truststore.jks -Djavax.net.ssl.trustStorePassword=*****
# Proxy settings (CONNECT)
-Dhttps.proxyHost=PROXY_HOST -Dhttps.proxyPort=PROXY_PORT
# Bypass (if allowed)
-Dhttp.nonProxyHosts="YOUR_HOST|*.yourdomain.com"

If inspection can’t be bypassed, you must import the proxy’s Root CA into the truststore, and if mTLS is required by your server, implement the two-hop mTLS model above.
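If you do end up importing the proxy's Root CA, a keytool sketch follows. The filenames, the alias, and the `changeit` password (Java's default truststore password) are placeholders; substitute your own paths:

```shell
# Import the proxy's Root CA into the client truststore (placeholder filenames/alias)
keytool -importcert -noprompt -trustcacerts \
  -alias corp-proxy-root \
  -file proxy_root_ca.pem \
  -keystore /path/truststore.jks -storepass changeit

# Confirm the CA landed in the store
keytool -list -keystore /path/truststore.jks -storepass changeit | grep -i corp-proxy-root
```

Point `-Djavax.net.ssl.trustStore` at the updated file afterwards.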

What to send to the proxy team

  • “Please bypass TLS inspection (or allow pure CONNECT) for YOUR_HOST:443.”
  • If not possible: “We’ll run two-hop mTLS. We’ll trust your proxy’s client CA for the proxy→server leg; you’ll trust ours for client→proxy.”

If you share the outputs of the two openssl/curl tests (redact hostnames as needed), I’ll tell you exactly whether it’s inspection vs tunneling and the minimal change to make it work.

Short answer: tcpdump can capture payload, but only if the traffic is plaintext.
If it’s HTTPS/TLS (which most APIs are), tcpdump will not show the body unless you decrypt it. It also won’t write into your “app logs”—it just captures packets.


ClientMTLSAuth status absent

It means no client certificate was presented/validated on the TLS handshake—i.e., client mTLS auth did not happen for that request.

Think of it like Nginx’s $ssl_client_verify = NONE: the server (Cloudflare/F5/Kong) didn’t see a client cert, so mTLS status is absent.

Why it happens

  • Client didn’t send a cert (missing --cert/--key in curl, or browser/keystore not configured).
  • mTLS not required on that hop (e.g., F5/Cloudflare set to “request/ignore” instead of “require”; Kong’s mtls-auth plugin not enabled).
  • TLS terminates before Kong (Cloudflare/F5 terminates TLS; the origin can’t see the client cert unless that proxy enforces mTLS itself).
  • Wrong host/port/SNI that doesn’t have the mTLS policy.
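You can check which of these applies by inspecting the handshake directly; when a server requests a client cert, openssl prints the CertificateRequest as "Acceptable client certificate CA names" (hostname is a placeholder):

```shell
# Non-empty output: the server requests a client cert on this hop.
# Empty output: mTLS is likely not required at this hostname/port/SNI.
openssl s_client -connect api.example.com:443 -servername api.example.com </dev/null 2>/dev/null \
  | grep -A5 -i "acceptable client certificate"
```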

Quick checks

# PEM pair
curl -vk https://api.example.com/path \
  --cert client.crt --key client.key

# or PKCS#12
curl -vk https://api.example.com/path \
  --cert-type P12 --cert client.p12:password

Look for SSL client certificate lines in the verbose output.

  • On Cloudflare/F5:
    • Require client certs on the relevant hostname/path.
    • Ensure the CA used to validate client certs is uploaded/selected.
    • If using Cloudflare, remember: the client’s cert terminates at Cloudflare; the origin (Kong) won’t receive it. Enforce mTLS at Cloudflare edge and use origin controls (e.g., Authenticated Origin Pulls) separately.
  • On Kong:
    • If using Enterprise mtls-auth plugin, enable it on the route/service and upload your CA certificate to /ca-certificates; reference it in the plugin.
    • If relying on Nginx-level mTLS, ensure the proxy listener is configured with ssl_client_certificate and ssl_verify_client on; (or the equivalent in your deployment).

Interpreting statuses

  • absent = no cert presented.
  • failed/invalid = cert presented but couldn’t be validated (untrusted/expired/CN mismatch).
  • success/verified = cert was validated.

If you share how traffic flows (Client → Cloudflare/F5 → Kong) and where you intend to enforce mTLS, I’ll give you the exact config and a working curl for that hop.

Kong troubleshooting

A 502 Bad Gateway error in your setup:

Client → LB → KONG → LB → Server

means that Kong Gateway cannot successfully connect to the upstream server behind the second load balancer. Here’s how to troubleshoot and resolve it step by step:


🔁 Understanding the Flow

  1. Client hits the first LB (e.g., AWS ELB, NGINX).
  2. That LB forwards traffic to Kong Gateway.
  3. Kong receives the request and proxies it to a second LB.
  4. That LB routes to your actual upstream service.

🛠️ Troubleshooting Checklist

✅ 1. Check Upstream URL in Kong

  • Run: curl -s http://<KONG_ADMIN>:8001/services/<service-name>
  • Ensure the upstream URL (url or host) points to a valid, resolvable address (DNS or IP) and correct port.

Example:

{
  "host": "internal-lb.yourdomain.local",
  "port": 8080,
  "protocol": "http"
}

✅ 2. Test Kong’s Network Reachability

From the Kong container or host:

curl -v http://internal-lb.yourdomain.local:8080/

If this fails:

  • DNS resolution might be broken
  • Port might be blocked
  • LB might not route correctly
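These three failure modes can be told apart from the Kong host with two quick checks (hostname and port are the placeholders from the example):

```shell
# DNS: no output here means resolution is broken
getent hosts internal-lb.yourdomain.local

# TCP: "Connection refused" (LB up, nothing listening) vs. a timeout (firewall drop / LB down)
nc -vz internal-lb.yourdomain.local 8080
```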

✅ 3. Enable Debug Logs in Kong

In kong.conf or via environment variable:

log_level = debug

Then check:

tail -f /usr/local/kong/logs/error.log

Look for messages like:

  • upstream timed out
  • could not resolve host
  • connection refused

✅ 4. Check Health of Second LB and Backend

  • Ensure second LB is up
  • Verify backend servers are healthy and accepting connections
  • Check if Kong’s IP is allowed (firewall or security group)

✅ 5. Check Kong Route & Service Configuration

Validate route is defined correctly:

curl -s http://<KONG_ADMIN>:8001/routes

Make sure paths, hosts, or methods match the request.


🧪 Example Kong Service & Route Setup

# Service pointing to internal load balancer
curl -i -X POST http://localhost:8001/services \
  --data name=upstream-service \
  --data url=http://internal-lb.yourdomain.local:8080

# Route for the service
curl -i -X POST http://localhost:8001/services/upstream-service/routes \
  --data paths[]=/api


🚫 Common Causes of 502 with LB Behind Kong

  • DNS resolution failure → use an IP, or fix /etc/resolv.conf or CoreDNS
  • Port not exposed or wrong → confirm the port with nc or curl
  • Second LB not forwarding correctly → check LB target groups and health checks
  • Kong plugin error (e.g., OIDC, rate-limiting) → disable plugins temporarily to isolate
  • HTTP vs HTTPS mismatch → ensure the protocol matches (http vs https)
  • Timeout too short → increase proxy_read_timeout or the service's timeouts
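If the timeout is the culprit, the per-service timeouts can be raised through the Admin API. The fields below are standard Kong service attributes, values are in milliseconds (60000 is the default), and the service name is the hypothetical one from this section:

```shell
curl -i -X PATCH http://localhost:8001/services/upstream-service \
  --data connect_timeout=60000 \
  --data read_timeout=120000 \
  --data write_timeout=120000
```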

✅ Final Tips

  • Try curl directly from Kong to the backend server.
  • Use Kong’s health check endpoint if you’re using upstream targets: curl http://localhost:8001/upstreams/<name>/health

If you share:

  • the exact curl call to Kong
  • the relevant Kong service/route config
  • error.log content from Kong

I can pinpoint which hop is failing and the exact fix.

The error message “upstream prematurely closed connection while reading response header from upstream” in Kong Gateway indicates that Kong attempted to read the response headers from the upstream service, but the connection was closed unexpectedly before the headers were fully received. This typically results in a 502 Bad Gateway error.

Common Causes

  1. Upstream Service Crashes or Terminates Connection Early:
    • The upstream application may crash, encounter an error, or intentionally close the connection before sending a complete response.
  2. Timeouts:
    • The upstream service takes too long to respond, exceeding Kong’s configured timeouts.
  3. Keepalive Connection Issues:
    • Persistent connections (keepalive) between Kong and the upstream service may be closed unexpectedly by the upstream, leading to this error.
  4. Protocol Mismatch:
    • Kong expects a certain protocol (e.g., HTTP/1.1), but the upstream service responds differently or uses an incompatible protocol.
  5. Large Response Headers:
    • The upstream service sends headers that exceed Kong’s buffer sizes, causing the connection to be closed prematurely.
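For the large-header case, Kong can pass nginx directives through kong.conf using the `nginx_proxy_` prefix; a sketch (the sizes are illustrative, tune them to your actual headers):

```ini
# kong.conf — injected into the proxy server block as proxy_buffer_size / proxy_buffers
nginx_proxy_proxy_buffer_size = 16k
nginx_proxy_proxy_buffers = 8 16k
```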

Yes, a mismatch between the protocol specified in Kong’s service configuration and the actual protocol used by the upstream service can lead to the error:

“upstream prematurely closed connection while reading response header from upstream”

This typically occurs when Kong attempts to communicate with an upstream service over HTTP, but the upstream expects HTTPS, or vice versa.


🔍 Understanding the Issue

When Kong is configured to connect to an upstream service, it uses the protocol specified in the service’s configuration. If the upstream service expects HTTPS connections and Kong is configured to use HTTP, the SSL/TLS handshake will fail, leading to the connection being closed prematurely.


For example, if your upstream service is accessible at https://api.example.com, but Kong is configured with:

curl -i -X POST http://localhost:8001/services \
  --data name=example-service \
  --data url=http://api.example.com

Kong will attempt to connect over HTTP, resulting in a failed connection.


✅ Solution

Ensure that the protocol in Kong’s service configuration matches the protocol expected by the upstream service.

If the upstream service requires HTTPS, configure the service in Kong accordingly:

curl -i -X POST http://localhost:8001/services \
  --data name=example-service \
  --data url=https://api.example.com

This ensures that Kong establishes a secure connection using HTTPS, aligning with the upstream service’s expectations.
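If the service already exists, the protocol can be corrected in place rather than recreating it (service name is the hypothetical one above; the `url` shorthand updates protocol, host, port, and path together):

```shell
curl -i -X PATCH http://localhost:8001/services/example-service \
  --data url=https://api.example.com
```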

Setting Up a YUM Repository (RHEL/CentOS)


1. Install and Configure a Web Server

Most repositories are served via HTTP. In this example, we’ll use Apache (httpd):

  1. Install Apache:

sudo yum install httpd -y

  2. Enable and start Apache:

sudo systemctl enable httpd
sudo systemctl start httpd

  3. Verify that Apache is running:
    Open your browser and navigate to http://<your-server-IP>/ to see the Apache welcome page.

2. Create the Repository Directory

  1. Make a directory to hold your repository files:

sudo mkdir -p /var/www/html/myrepo

  2. Copy your RPM packages into this directory:

sudo cp /path/to/your/rpms/*.rpm /var/www/html/myrepo/

3. Generate Repository Metadata

  1. Install the createrepo tool:

sudo yum install createrepo -y

  2. Run createrepo in your repository directory:

sudo createrepo /var/www/html/myrepo

This command creates a repodata directory with all the necessary metadata for the repository.

4. Set Proper Permissions

  1. Ensure Apache can read the files:

sudo chmod -R 755 /var/www/html/myrepo

  2. If SELinux is enabled, adjust the file context:

sudo chcon -R -t httpd_sys_content_t /var/www/html/myrepo

5. (Optional) Configure the Firewall

Make sure HTTP traffic is allowed:

  • For firewalld:

sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload
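On the client side, each machine needs a repo definition pointing at your server. A sketch follows; it writes the file to the current directory (copy it to /etc/yum.repos.d/ on each client), and "myrepo" plus the baseurl are placeholders:

```shell
# Client-side repo definition; gpgcheck=0 only because this example repo is unsigned
cat > myrepo.repo <<'EOF'
[myrepo]
name=My Local Repository
baseurl=http://<your-server-IP>/myrepo
enabled=1
gpgcheck=0
EOF
```

After installing it, `sudo yum clean all && sudo yum repolist` should show the repo.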

Ansible – host list

The error “host list declined parsing host file as it did not pass its verify_file() method” occurs when Ansible cannot properly parse your inventory file. This is typically due to an invalid format or incorrect file structure. Here’s how to troubleshoot and fix it:


Step 1: Verify Inventory File Format

Correct INI Format Example

Ensure your inventory file uses the proper INI-style syntax:

[webservers]
web1 ansible_host=192.168.1.10 ansible_user=root
web2 ansible_host=192.168.1.11 ansible_user=root

[dbservers]
db1 ansible_host=192.168.1.20 ansible_user=root

Correct YAML Format Example

If using a YAML-based inventory file, ensure it follows the correct structure:

all:
  hosts:
    web1:
      ansible_host: 192.168.1.10
      ansible_user: root
    web2:
      ansible_host: 192.168.1.11
      ansible_user: root
  children:
    dbservers:
      hosts:
        db1:
          ansible_host: 192.168.1.20
          ansible_user: root


Step 2: Check File Extension

  • For INI-style inventory, use .ini or no extension (e.g., inventory).
  • For YAML-style inventory, use .yaml or .yml.

Step 3: Test the Inventory File

Use the ansible-inventory command to validate the inventory file:

ansible-inventory -i inventory --list

  • Replace inventory with the path to your inventory file.
  • If there’s an error, it will provide details about what’s wrong.

Step 4: Use Explicit Inventory Path

When running an Ansible command or playbook, explicitly specify the inventory file:

ansible all -i /path/to/inventory -m ping


Step 5: Check Syntax Errors

  1. Ensure there are no trailing spaces or unexpected characters in your inventory file.
  2. Use a linter for YAML files if you suspect a YAML formatting issue:

yamllint /path/to/inventory.yaml


Step 6: Set Permissions

Ensure the inventory file has the correct permissions so that Ansible can read it:

chmod 644 /path/to/inventory

Ansible playbook to run a shell script on your remote hosts:

#!/bin/bash
# example_script.sh

echo "Hello, this is a sample script."
echo "Current date and time: $(date)"

The playbook:

---
- name: Run Shell Script on Remote Hosts
  hosts: all
  become: yes
  tasks:
    - name: Copy Shell Script to Remote Hosts
      copy:
        src: /path/to/example_script.sh
        dest: /tmp/example_script.sh
        mode: '0755'

    - name: Run Shell Script
      command: /tmp/example_script.sh

    - name: Remove Shell Script from Remote Hosts
      file:
        path: /tmp/example_script.sh
        state: absent

Explanation:
Copy Shell Script: The copy module is used to transfer the shell script from the local machine to the remote hosts. The mode parameter ensures that the script is executable.

Run Shell Script: The command module is used to execute the shell script on the remote hosts.

Remove Shell Script: The file module is used to delete the shell script from the remote hosts after execution to clean up.

Running the Playbook:
To run the playbook, use the following command (substitute your own inventory and playbook paths):

ansible-playbook -i /path/to/inventory /path/to/playbook.yml

Kong – add custom plugins (ansible playbook)

---
- name: Deploy and enable a custom plugin in Kong
  hosts: kong_servers
  become: yes
  vars:
    plugin_name: "my_custom_plugin"
    plugin_source_path: "/path/to/local/plugin" # Local path to the plugin code
    kong_plugin_dir: "/usr/local/share/lua/5.1/kong/plugins" # Default Kong plugin directory
  tasks:

    - name: Ensure Kong plugin directory exists
      file:
        path: "{{ kong_plugin_dir }}/{{ plugin_name }}"
        state: directory
        mode: '0755'

    - name: Copy plugin files to Kong plugin directory
      copy:
        src: "{{ plugin_source_path }}/"
        dest: "{{ kong_plugin_dir }}/{{ plugin_name }}/"
        mode: '0644'

    - name: Verify plugin files were copied
      shell: ls -la "{{ kong_plugin_dir }}/{{ plugin_name }}"
      register: verify_plugin_copy
    - debug:
        var: verify_plugin_copy.stdout

    - name: Update Kong configuration to include the custom plugin
      lineinfile:
        path: "/etc/kong/kong.conf"
        regexp: "^plugins ="
        line: "plugins = bundled,{{ plugin_name }}"
        state: present
      notify: restart kong

    - name: Verify the plugin is enabled
      shell: kong config parse /etc/kong/kong.conf
      register: config_check
    - debug:
        var: config_check.stdout

  handlers:
    - name: restart kong
      service:
        name: kong
        state: restarted
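To deploy and then activate the plugin, something like the following — the inventory path, playbook filename, and service name are placeholders:

```shell
# Deploy the plugin code and restart Kong
ansible-playbook -i inventory deploy_custom_plugin.yml

# Then enable the plugin on a service via the Admin API
curl -i -X POST http://localhost:8001/services/<service-name>/plugins \
  --data name=my_custom_plugin
```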

Steps to install HPE Ezmeral 7.x on Linux cluster

Installing HPE Ezmeral Data Fabric (formerly MapR) version 7.x on a 12-node Linux cluster requires planning and configuration. Here are the detailed steps to install and configure the cluster:


Step 1: Prerequisites

  1. System Requirements:
    • 64-bit Linux (RHEL/CentOS 7 or 8, or equivalent).
    • Minimum hardware for each node:
      • Memory: At least 16GB RAM.
      • CPU: Quad-core or higher.
      • Disk: Minimum of 500GB of storage.
  2. Network Configuration:
    • Assign static IP addresses or hostnames to all 12 nodes.
    • Configure DNS or update /etc/hosts with the IP and hostname mappings.
    • Ensure nodes can communicate with each other via SSH.
  3. Users and Permissions:
    • Create a dedicated user for HPE Ezmeral (e.g., mapr).
    • Grant the user passwordless SSH access across all nodes.
  4. Firewall and SELinux:
    • Disable or configure the firewall to allow required ports.
    • Set SELinux to permissive mode:

sudo setenforce 0

sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

  5. Java Installation:
    • Install Java (OpenJDK 11 recommended):

sudo yum install java-11-openjdk -y
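The passwordless-SSH prerequisite can be sketched as follows — run as the mapr user on one node; node1 through node12 are placeholder hostnames:

```shell
# Generate a key pair once, then push the public key to every node
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for n in node{1..12}; do
  ssh-copy-id "mapr@$n"
done
```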


Step 2: Download HPE Ezmeral Data Fabric Software

  1. Obtain Software:
    • Download the HPE Ezmeral 7.x installation packages from the official HPE Ezmeral website.
  2. Distribute Packages:
    • Copy the packages to all 12 nodes using scp or a similar tool.

Step 3: Install Core Services

  1. Install the Core Packages:
    • On each node, install the required packages:

sudo yum install mapr-core mapr-fileserver mapr-cldb mapr-webserver -y

  2. Install Additional Services:
    • Based on your use case, install additional packages (e.g., mapr-zookeeper, mapr-nodemanager, etc.).

Step 4: Configure ZooKeeper

  1. Select ZooKeeper Nodes:
    • Choose three nodes to run the ZooKeeper service (e.g., node1, node2, node3).
  2. Edit the ZooKeeper Configuration:
    • Update the ZooKeeper configuration file (/opt/mapr/zookeeper/zookeeper-<version>/conf/zoo.cfg) on the ZooKeeper nodes:

tickTime=2000
dataDir=/var/mapr/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888

  3. Initialize ZooKeeper:
    • On each ZooKeeper node, create a myid file:

echo "1" > /var/mapr/zookeeper/myid  # Replace with 2 or 3 for other nodes

  4. Start ZooKeeper:

sudo systemctl start mapr-zookeeper


Step 5: Configure the Cluster

  1. Initialize the Cluster:
    • Run the cluster initialization command from one node:

/opt/mapr/server/configure.sh -C node1,node2,node3 -Z node1,node2,node3

  • Replace node1,node2,node3 with the actual hostnames of the CLDB and ZooKeeper nodes.
  2. Verify Installation:
    • Check the cluster status:

maprcli cluster info

  3. Add Nodes to the Cluster:
    • On each additional node, configure it to join the cluster:

/opt/mapr/server/configure.sh -N <cluster_name> -C node1,node2,node3 -Z node1,node2,node3


Step 6: Start Core Services

  1. Start CLDB:
    • Start the CLDB service on the designated nodes:

sudo systemctl start mapr-cldb

  2. Start FileServer and WebServer:
    • Start the file server and web server services on all nodes:

sudo systemctl start mapr-fileserver

sudo systemctl start mapr-webserver

  3. Start Node Manager:
    • If using YARN, start the Node Manager service on all nodes:

sudo systemctl start mapr-nodemanager


Step 7: Post-Installation Steps

  1. Access the Web Interface:
    • Open a browser and go to the web interface of your cluster:

https://<CLDB-node-IP>:8443

  • Log in using the mapr user credentials.
  2. Add Storage:
    • Add storage disks to the cluster using the web interface or CLI:

maprcli disk list

maprcli disk add -all

  3. Secure the Cluster:
    • Enable Kerberos or LDAP integration for authentication.
    • Configure SSL/TLS for secure communication.

Step 8: Verify the Cluster

  1. Check Services:
    • Verify that all services are running:

maprcli node list -columns svc

  2. Run a Test:
    • Create a directory in the file system and test file creation:

hadoop fs -mkdir /test

hadoop fs -put /path/to/local/file /test

Ping auth plugin

The Ping Auth plugin for Kong API Gateway integrates with Ping Identity’s PingAuthorize to provide attribute-based access control and policy decisions. This plugin allows Kong to use Ping products for dynamic authorization, making it easier to control access to your API resources.

Key Features:

  • Attribute-Based Access Control (ABAC): Allows fine-grained access control based on user attributes and policies.
  • Dynamic Authorization: Policies can be updated in real-time without redeploying the API gateway.
  • Mutual TLS (mTLS): Supports client certificate authentication using mTLS.
  • Sideband API Protocol: Communicates with Ping services to retrieve authorization decisions.

Installation:

  1. Download the Plugin: Get the kong-plugin-ping-auth package from LuaRocks.
  2. Install the Plugin: Use LuaRocks to install the plugin:

luarocks install kong-plugin-ping-auth

  3. Configure Kong: Add the plugin to your Kong configuration:

plugins = bundled,ping-auth

  4. Apply the Plugin: Enable and configure the plugin via Kong’s admin UI or API.

Example Configuration:

plugins:
  - name: ping-auth
    config:
      service_url: "https://your-ping-service/policy"

This setup allows Kong to communicate with Ping services to handle authorization decisions.
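Assuming the `service_url` field shown above, enabling the plugin on a single service via the Admin API would look roughly like this (the service name is a placeholder):

```shell
curl -i -X POST http://localhost:8001/services/<service-name>/plugins \
  --data name=ping-auth \
  --data config.service_url=https://your-ping-service/policy
```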

Would you like more details on configuring this plugin or any other specific information?

https://github.com/pingidentity/kong-plugin-ping-auth

High Availability (HA) for two Kong API Gateway instances

To set up High Availability (HA) for two Kong API Gateway instances, you need to configure them to work seamlessly, ensuring reliability and fault tolerance. Below are the steps in detail:


1. HA Architecture Overview

In an HA setup, two Kong Gateway instances are deployed behind a load balancer. Both instances share the same configuration data, stored in a database (e.g., PostgreSQL or Cassandra), or operate in DB-less mode if configuration is managed via automation.

Components of the Architecture

  • Kong API Gateway Instances: Two or more Kong nodes deployed on separate servers.
  • Load Balancer: Distributes traffic to Kong nodes.
  • Database (Optional): A shared PostgreSQL or Cassandra instance for storing configuration if not using DB-less mode.
  • Health Monitoring: Ensures requests are routed only to healthy Kong nodes.

2. Setup Steps

Step 1: Install Kong on Two Nodes

  1. Follow the Kong installation guide for your operating system.
    • Kong Installation Guide
  2. Ensure both nodes are installed with the same version of Kong.

Step 2: Configure a Shared Database (If Not Using DB-less Mode)

Database Setup:

  1. Install PostgreSQL or Cassandra on a separate server (or cluster for HA).
  2. Create a Kong database user and database.
    • Example for PostgreSQL:

CREATE USER kong WITH PASSWORD 'kong';
CREATE DATABASE kong OWNER kong;

  3. Update the kong.conf file on both nodes to point to the shared database:

database = postgres
pg_host = <DATABASE_IP>
pg_port = 5432
pg_user = kong
pg_password = kong
pg_database = kong

  4. Run the Kong migrations (only on one node):

kong migrations bootstrap


Step 3: DB-less Mode (Optional)

If you prefer DB-less mode for better scalability and faster failover:

  1. Use declarative configuration with a YAML file (kong.yml).
  2. Place the configuration file on both Kong nodes.
  3. Set the kong.conf file to use DB-less mode:

database = off

declarative_config = /path/to/kong.yml
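A minimal kong.yml sketch for this setup — the `_format_version` depends on your Kong release (older versions use "1.1" or "2.1"), and the names/URLs are placeholders:

```yaml
_format_version: "3.0"
services:
  - name: upstream-service
    url: http://internal-lb.yourdomain.local:8080
    routes:
      - name: api-route
        paths:
          - /api
```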


Step 4: Configure Load Balancer

Set up a load balancer to distribute traffic between the two Kong instances.

Options:

  • F5, HAProxy, or NGINX for on-premises environments.
  • AWS Elastic Load Balancer (ELB) for cloud-based setups.

Configuration Example:

  • Backend pool: Add both Kong instances (Node1_IP and Node2_IP).
  • Health checks: Use HTTP health checks to monitor the /status endpoint of Kong.

curl -X GET http://<KONG_INSTANCE>:8001/status


Step 5: Synchronize Configuration Across Nodes

For consistency, ensure both Kong nodes share the same configuration.

DB-Mode Synchronization:

  • Configurations are automatically synchronized via the shared database.

DB-less Mode Synchronization:

  • Use configuration management tools like Ansible, Terraform, or CI/CD pipelines to deploy the kong.yml file to all nodes.

Step 6: Enable Kong Clustering (Legacy Versions Only)

Note: Serf-based clustering (cluster_listen) applies only to very old Kong releases (pre-0.11). Modern Kong nodes coordinate through the shared database or shared declarative config and need no gossip settings. If you are on such a legacy version with Cassandra:

  1. Enable clustering in the kong.conf file:

cluster_listen = 0.0.0.0:7946
cluster_listen_rpc = 127.0.0.1:7373

  2. Ensure that ports 7946 (gossip) and 7373 (RPC) are open between Kong nodes.

Step 7: Configure SSL (TLS)

  1. Generate SSL certificates for your domain.
  2. Configure Kong to use these certificates for the gateway.

curl -X POST http://<KONG_ADMIN_API>/certificates \
  -F "cert=@/path/to/cert.pem" \
  -F "key=@/path/to/key.pem" \
  -F "snis[]=example.com"


Step 8: Test the Setup

Health Check:

  • Verify the /status endpoint on both nodes:

curl -X GET http://<KONG_NODE_IP>:8001/status

Request Routing:

  • Send a test request through the load balancer:

curl -X GET http://<LOAD_BALANCER_IP>:8000/your-api

  • Verify logs on both Kong instances to ensure traffic is distributed.

Example HA Diagram

                     +-----------------------+
                     |     Load Balancer     |
                     |  (F5, ELB, HAProxy)   |
                     +-----------+-----------+
                                 |
                +----------------+----------------+
                |                                 |
    +-----------+-----------+         +-----------+-----------+
    |   Kong Instance 1     |         |   Kong Instance 2     |
    |     (Node1_IP)        |         |     (Node2_IP)        |
    +-----------+-----------+         +-----------+-----------+
                |                                 |
    +-----------+---------------------------------+-----------+
    |  Shared Database (DB mode) OR Shared Config (DB-less)   |
    +---------------------------------------------------------+


Best Practices for Kong HA

  1. Load Balancer Health Checks:
    • Ensure your load balancer only forwards requests to healthy Kong instances.
  2. Database High Availability:
    • Use a clustered database for the shared configuration.
  3. Monitoring:
    • Integrate Kong with monitoring tools (e.g., Prometheus, Grafana) to track performance.
  4. Rate Limiting:
    • Configure plugins for rate limiting to prevent node overload.
  5. Session Persistence:
    • Use sticky sessions if required by your application.

By following these steps, you’ll achieve a robust, highly available Kong Gateway setup. Let me know if you need help with specific configurations!