To set up High Availability (HA) for two Kong API Gateway instances, you need to configure them to work seamlessly, ensuring reliability and fault tolerance. Below are the steps in detail:
1. HA Architecture Overview
In an HA setup, two Kong Gateway instances are deployed behind a load balancer. Both instances share the same configuration data, stored in a database (e.g., PostgreSQL or Cassandra), or operate in DB-less mode if configuration is managed via automation.
Components of the Architecture
- Kong API Gateway Instances: Two or more Kong nodes deployed on separate servers.
- Load Balancer: Distributes traffic to Kong nodes.
- Database (Optional): A shared PostgreSQL or Cassandra instance for storing configuration if not using DB-less mode.
- Health Monitoring: Ensures requests are routed only to healthy Kong nodes.
2. Setup Steps
Step 1: Install Kong on Two Nodes
- Follow the official Kong installation guide for your operating system.
- Ensure both nodes are installed with the same version of Kong.
Step 2: Configure a Shared Database (If Not Using DB-less Mode)
Database Setup:
- Install PostgreSQL or Cassandra on a separate server (or cluster for HA).
- Create a Kong database user and database.
- Example for PostgreSQL:
CREATE USER kong WITH PASSWORD 'kong';
CREATE DATABASE kong OWNER kong;
- Update the kong.conf file on both nodes to point to the shared database:
database = postgres
pg_host = <DATABASE_IP>
pg_port = 5432
pg_user = kong
pg_password = kong
pg_database = kong
- Run the Kong migrations (only on one node):
kong migrations bootstrap
Step 3: DB-less Mode (Optional)
If you prefer DB-less mode for better scalability and faster failover:
- Use declarative configuration with a YAML file (kong.yml).
- Place the configuration file on both Kong nodes.
- Set the kong.conf file to use DB-less mode:
database = off
declarative_config = /path/to/kong.yml
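As a sketch, a minimal kong.yml for DB-less mode might look like the following (the service name, upstream URL, and route path are placeholder assumptions; adjust them to your environment):

```yaml
_format_version: "3.0"

services:
  - name: example-service            # placeholder service name
    url: http://upstream.internal:8080
    routes:
      - name: example-route
        paths:
          - /your-api
```

Both nodes must receive identical copies of this file for the setup to behave as one gateway.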
Step 4: Configure Load Balancer
Set up a load balancer to distribute traffic between the two Kong instances.
Options:
- F5, HAProxy, or NGINX for on-premises environments.
- AWS Elastic Load Balancer (ELB) for cloud-based setups.
Configuration Example:
- Backend pool: Add both Kong instances (Node1_IP and Node2_IP).
- Health checks: Use HTTP health checks to monitor the /status endpoint of Kong.
curl -X GET http://<KONG_INSTANCE>:8001/status
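For example, a minimal HAProxy configuration for this setup could look like the sketch below. Node1_IP, Node2_IP, and the check timings are assumptions; note that the health check targets Kong's /status endpoint on the Admin API port (8001), so that port should be firewalled from untrusted networks:

```
frontend kong_proxy
    bind *:8000
    default_backend kong_nodes

backend kong_nodes
    balance roundrobin
    option httpchk GET /status
    # Proxy traffic goes to port 8000; health checks probe the Admin API on 8001
    server kong1 <Node1_IP>:8000 check port 8001 inter 5s fall 3 rise 2
    server kong2 <Node2_IP>:8000 check port 8001 inter 5s fall 3 rise 2
```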
Step 5: Synchronize Configuration Across Nodes
For consistency, ensure both Kong nodes share the same configuration.
DB-Mode Synchronization:
- Configurations are automatically synchronized via the shared database.
DB-less Mode Synchronization:
- Use configuration management tools like Ansible, Terraform, or CI/CD pipelines to deploy the kong.yml file to all nodes.
Step 6: Enable Kong Clustering (Legacy Versions Only)
Note: the Serf-based clustering settings below apply only to older Kong releases (0.10 and earlier). Modern Kong nodes that share a database coordinate through the database automatically and need no extra clustering configuration.
- For legacy deployments, enable clustering in the kong.conf file:
cluster_listen = 0.0.0.0:7946
cluster_listen_rpc = 127.0.0.1:7373
- Ensure that ports 7946 (gossip communication) and 7373 (RPC communication) are open between Kong nodes.
Step 7: Configure SSL (TLS)
- Generate SSL certificates for your domain.
- Configure Kong to use these certificates for the gateway.
curl -X POST http://<KONG_ADMIN_API>/certificates \
  -F "cert=@/path/to/cert.pem" \
  -F "key=@/path/to/key.pem" \
  -F "snis[]=example.com"
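Alternatively, a default certificate can be set directly in kong.conf on each node (the paths below are placeholders). Managing certificates through the Admin API is generally preferable when they rotate frequently, since it does not require restarting Kong:

```
ssl_cert = /path/to/cert.pem
ssl_cert_key = /path/to/key.pem
```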
Step 8: Test the Setup
Health Check:
- Verify the /status endpoint on both nodes:
curl -X GET http://<KONG_NODE_IP>:8001/status
Request Routing:
- Send a test request through the load balancer:
curl -X GET http://<LOAD_BALANCER_IP>:8000/your-api
- Verify logs on both Kong instances to ensure traffic is distributed.
Example HA Diagram
                 +---------------------+
                 |    Load Balancer    |
                 | (F5, ELB, HAProxy)  |
                 +----------+----------+
                            |
            +---------------+---------------+
            |                               |
 +----------+----------+         +----------+----------+
 |   Kong Instance 1   |         |   Kong Instance 2   |
 |     (Node1_IP)      |         |     (Node2_IP)      |
 +----------+----------+         +----------+----------+
            |                               |
 +----------+-------------------------------+----------+
 |     Shared Database (DB mode) OR Shared Config      |
 |                File (DB-less mode)                  |
 +-----------------------------------------------------+
Best Practices for Kong HA
- Load Balancer Health Checks: ensure your load balancer forwards requests only to healthy Kong instances.
- Database High Availability: use a clustered database for the shared configuration.
- Monitoring: integrate Kong with monitoring tools (e.g., Prometheus, Grafana) to track performance.
- Rate Limiting: configure rate-limiting plugins to prevent node overload.
- Session Persistence: use sticky sessions if required by your application.
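As a sketch for the rate-limiting point above: in a two-node HA setup, the plugin's default local policy counts requests per node, so a shared counter store is usually wanted. A declarative example is shown below (the Redis host, port, and limit values are assumptions):

```yaml
plugins:
  - name: rate-limiting
    config:
      minute: 100              # allow 100 requests per minute
      policy: redis            # share counters across both Kong nodes
      redis_host: redis.internal
      redis_port: 6379
```

With the redis policy, both Kong instances enforce a single combined limit rather than 100 requests per minute each.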
By following these steps, you’ll achieve a robust, highly available Kong Gateway setup. Let me know if you need help with specific configurations!