Yes — F5 (BIG-IP or similar load balancer) in front of Kong can definitely contribute to or cause a 500 Internal Server Error, depending on how it’s configured.
✅ Scenarios Where F5 Can Cause 500 Errors in Kong
| Scenario | Explanation | How It Leads to 500 |
|---|---|---|
| Improper header rewrites | F5 strips or rewrites headers like Host, Authorization, X-Forwarded-* | Kong plugins (e.g., OIDC, JWT) fail internally |
| SSL offloading with missing SNI | F5 terminates TLS and doesn’t forward proper SNI or client cert info | Kong mTLS/auth plugins crash |
| Incorrect HTTP method handling | F5 mishandles certain HTTP methods (e.g., PATCH, OPTIONS) | Kong routes fail or misroute |
| Request body corruption | F5 changes chunked encoding or breaks body format | Kong/Lua fails to parse body |
| Timeouts/retries | F5 retry logic sends malformed or duplicate requests | Kong misinterprets retries, triggers internal logic error |
🔍 What You Can Check
Preserve Headers:
Ensure F5 forwards: Host, Authorization, X-Forwarded-For, X-Forwarded-Proto, etc.
SSL Passthrough or Proper Re-Termination:
If you terminate SSL at F5, ensure Kong receives expected headers (X-Forwarded-Proto=https, etc.)
If Kong expects client certs (mTLS), make sure F5 passes them correctly.
Check Kong Logs: Look for header-related Lua errors or nil-value dereferencing, e.g. "attempt to index a nil value".
F5 Request Logs: Check the exact format F5 sends the request in (method, headers, body).
Run curl directly to Kong (bypass F5):
curl -v http://<Kong-IP>:8000/<your-path>
If it works directly but fails via F5, then F5 is the culprit.
✅ Fixes and Tips
In F5:
Enable X-Forwarded-For, X-Forwarded-Proto, and Host headers.
Disable chunking if Kong or upstream doesn’t support it.
Avoid rewriting or stripping headers unless needed.
In Kong:
Enable trusted_ips and configure real_ip_header if you’re using X-Forwarded-For.
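As a minimal sketch, those two Kong settings can be supplied through the environment-variable form of kong.conf (Kong reads any KONG_-prefixed variable as the matching kong.conf property at startup). The 10.0.0.0/24 subnet is an assumption; substitute your F5's addresses.

```shell
# Trust X-Forwarded-* headers only when they arrive from the F5's subnet (assumed here)
export KONG_TRUSTED_IPS="10.0.0.0/24"
# Derive the client IP from X-Forwarded-For rather than the socket peer (the F5)
export KONG_REAL_IP_HEADER="X-Forwarded-For"
# Walk back through multiple proxy hops in the header, not just the last one
export KONG_REAL_IP_RECURSIVE="on"
```

These take effect the next time Kong starts; with misconfigured trusted_ips, Kong will ignore X-Forwarded-For and log the F5's IP as the client.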
If you show me:
A sample request through F5 (e.g., with curl -v)
Any Kong plugin you’re using (OIDC, JWT, Rate Limiting, etc.)
A redacted error log line
…I can help pinpoint whether F5 is definitely the cause.
The error “This certificate cannot be verified” typically means that the client system cannot validate the certificate chain. Here are the most common causes and how to fix them:
🔍 Root Causes and Fixes
1. Missing Intermediate or Root CA
Cause: The certificate presented by the server is not trusted because the issuing authority (CA) is not in the client’s trusted store.
Fix:
Ensure the server presents the full certificate chain (leaf + intermediates; the root is already in the client’s trust store, so serving it is unnecessary).
In web servers or API gateways (like Kong), configure the full chain in the cert file:
cat server.crt intermediate.crt > fullchain.crt
and use fullchain.crt as your public certificate.
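To see why the client needs the issuing CA, here is a throwaway sketch (no intermediate, all names invented) that mints a toy CA, signs a server cert with it, and verifies the result the way a client would:

```shell
# Create a toy root CA (key + self-signed cert), valid 1 day
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=Example Root CA" -days 1
# Create a server key and CSR
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=api.example.test"
# Sign the server cert with the toy CA
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 1
# A client that trusts ca.crt can now verify server.crt; prints "server.crt: OK"
openssl verify -CAfile ca.crt server.crt
```

Without `-CAfile ca.crt` (i.e., without the CA in the trust store), the same verify fails with "unable to get local issuer certificate", which is the CLI analogue of the browser error above.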
2. Self-Signed Certificate Not Trusted
Cause: If it’s a self-signed cert and the CA cert is not installed on the client machine.
Fix:
Manually install the root CA certificate on the client:
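For example, on a Debian/Ubuntu client (a sketch; the file name is hypothetical, and other distros and OSes use different tools and paths):

```shell
# Copy the root CA cert (PEM, .crt extension required) into the system trust dir
sudo cp rootCA.crt /usr/local/share/ca-certificates/rootCA.crt
# Rebuild the system trust store so OpenSSL-based clients pick it up
sudo update-ca-certificates
```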
Here’s a clear breakdown of how authentication works in a Kong + Ping Identity + IAM setup, which is common in enterprise environments where Kong acts as an API gateway and Ping Identity (like PingFederate/PingOne) handles user authentication and token issuance.
HIGH-LEVEL AUTH FLOW
Scenario:
Client → Kong Gateway → Protected Service
Kong integrates with Ping Identity for OIDC authentication or JWT validation, often backed by a central IAM system.
COMPONENT ROLES
| Component | Role |
|---|---|
| Kong Gateway | API gateway that enforces authentication & authorization plugins |
| Ping Identity | Identity Provider (IdP) – handles login, token issuance, and federation |
| IAM (e.g., Ping IAM, LDAP, AD) | Stores users, groups, permissions, policies |
AUTHENTICATION FLOW (OIDC Plugin)
OpenID Connect (Authorization Code Flow):
1. Client → Kong: tries to access a protected API.
2. Kong (OIDC plugin): redirects the client to Ping Identity’s authorization endpoint.
3. Ping Identity (PingFederate or PingOne): authenticates the user (UI, MFA, etc.) and issues an authorization_code.
4. Kong → Ping Identity (token endpoint): exchanges the code for an access_token (and optionally id_token, refresh_token).
5. Kong (OIDC plugin): validates the token, optionally maps the user to a Kong consumer, and passes the request to the backend with enriched headers (e.g., X-Consumer-Username).
6. Backend Service: receives the authenticated request with headers.
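The flow above is typically enabled by attaching Kong's OIDC plugin (named openid-connect in the Kong Enterprise/Konnect bundle) to the service. A hedged sketch; the issuer URL, client ID, and secret are placeholders for your Ping environment:

```shell
# All values are placeholders; point issuer at your PingFederate/PingOne discovery document
curl -X POST http://localhost:8001/services/my-service/plugins \
  --data "name=openid-connect" \
  --data "config.issuer=https://ping.example.com/.well-known/openid-configuration" \
  --data "config.client_id=my-client-id" \
  --data "config.client_secret=my-client-secret" \
  --data "config.auth_methods=authorization_code"
```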
JWT Token Validation (Alt. Flow)
If Ping Identity issues JWT tokens, you can use Kong’s JWT plugin to validate them without redirect:
1. Client gets access_token from Ping (out of band or via a SPA).
2. Client sends the request with Authorization: Bearer <token>.
3. Kong’s JWT plugin verifies the token’s signature against the registered key and checks its claims, then proxies the request upstream.
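A minimal sketch of that alternative (service and consumer names are hypothetical; assumes Ping signs tokens with RS256 and its public key has been exported to ping-public.pem):

```shell
# Enable JWT validation on the service
curl -X POST http://localhost:8001/services/my-service/plugins \
  --data "name=jwt"
# Register Ping's signing key under a consumer.
# "key" must match the token's iss claim (the plugin's default key_claim_name).
curl -X POST http://localhost:8001/consumers/my-app/jwt \
  --data "algorithm=RS256" \
  --data-urlencode "rsa_public_key@ping-public.pem" \
  --data "key=https://ping.example.com"
```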
Yes, 20 TPS (Transactions Per Second) is considered low for most production workloads, unless one of the following applies:
✅ When 20 TPS is reasonable:
You’re testing or doing a soft launch.
The API serves high-cost or sensitive operations (e.g., financial transactions).
You want to protect legacy backends that can’t handle high concurrency.
You’re applying it per client (per consumer) rather than globally.
❗ When 20 TPS is too low:
You’re limiting at the global or service level and expect hundreds or thousands of users.
Your application is public-facing, and users experience 429 errors (rate limit exceeded).
You’re throttling batch jobs, mobile apps, or systems that burst above 20 TPS.
📊 Typical API Rate Benchmarks:
| Use Case | Typical Rate |
|---|---|
| Internal microservice | 100–10,000+ TPS |
| Public API (freemium) | 50–100 TPS per key |
| Critical API (payments, auth) | 5–50 TPS with burst handling |
| Global rate limit | 1,000+ TPS |
🔁 Recommendation:
Set burst + sustained limits, e.g.:
config.second=20
config.hour=72000 (20 * 3600)
Use Redis policy for accurate global limits across nodes.
Add consumer-based limits if needed.
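Those recommendations combine into something like the following Admin API sketch (the service name and Redis host are assumptions; note that older Kong versions use flat config.redis_host/config.redis_port, while recent 3.x releases nest these under config.redis):

```shell
# Sustained limit derived from the 20 TPS burst limit
HOUR_LIMIT=$((20 * 3600))   # 72000 requests/hour
curl -X POST http://localhost:8001/services/my-service/plugins \
  --data "name=rate-limiting" \
  --data "config.second=20" \
  --data "config.hour=$HOUR_LIMIT" \
  --data "config.policy=redis" \
  --data "config.redis_host=redis.internal" \
  --data "config.redis_port=6379"
```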
Would you like a rate-limiting strategy per user tier (e.g., free vs. premium)?
Here’s a practical rate-limiting strategy per user tier (e.g., Free, Standard, Premium) using Kong Gateway, suitable for API plans on AWS or anywhere Kong is deployed.
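One common pattern, sketched with hypothetical consumer names and illustrative limits: attach the rate-limiting plugin per consumer, with a different limit per tier (Kong Enterprise/Konnect also offers consumer groups for this).

```shell
# Free tier: 5 req/s
curl -X POST http://localhost:8001/consumers/free-user/plugins \
  --data "name=rate-limiting" \
  --data "config.second=5" \
  --data "config.policy=local"
# Premium tier: 100 req/s
curl -X POST http://localhost:8001/consumers/premium-user/plugins \
  --data "name=rate-limiting" \
  --data "config.second=100" \
  --data "config.policy=local"
```

A consumer-level plugin overrides any service-level rate-limiting plugin for that consumer, so a global default plus per-tier overrides is a workable layout.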
The “upstream status 500” in Kong Gateway means that Kong successfully forwarded the client request to the upstream service (your backend/API), but the upstream service itself responded with a 500 Internal Server Error.
🔍 What this typically indicates:
The issue is with the upstream service, not Kong.
Kong acted as a proxy and did its job correctly; the 500 came from your backend application.
If you’re getting 401 and 400 intermittently from the upstream service, that strongly suggests authentication-related issues, often tied to token forwarding, expiration, or format mismatch.
🔁 Quick Summary of Key Differences
| Status | Meaning | Common Cause |
|---|---|---|
| 401 | Unauthorized | Missing/invalid/expired credentials |
| 400 | Bad Request | Malformed or incomplete request (e.g. OIDC token request) |
🧠 Intermittent 401 + 400: Common Root Causes
🔸 1. Expired or Reused Tokens
Kong gets a token once, caches it, and keeps using it—but upstream expects a fresh one.
Especially common with client credentials or authorization code flows.
Solution:
Set token caching to a short duration, or disable it, in the OIDC plugin:
config:
  cache_ttl: 0  # Or a very short TTL like 5
🔸 2. Multiple Consumers with Invalid Secrets
One client (consumer) is configured correctly, others are not.
You see 401/400 when the bad client makes a request.
Solution:
Enable verbose logging in Kong:
export KONG_LOG_LEVEL=debug
kong reload
Then correlate consumer_id with the error.
🔸 3. Kong Not Forwarding Tokens Correctly
Kong authenticates but doesn’t forward Authorization header to the upstream.
Some plugins strip headers by default.
Solution:
Add the request-transformer plugin to pass the token:
curl -X POST http://localhost:8001/services/YOUR_SERVICE/plugins \
  --data "name=request-transformer" \
  --data "config.add.headers=Authorization:Bearer $(jwt_token)"
🔸 4. OIDC Plugin Misconfiguration
If you’re using the OpenID Connect plugin:
grant_type, client_id, or redirect_uri may be wrong or missing intermittently.
Kong might request a new token but fail to pass a correct one.
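When debugging this, it can help to dump the plugin's effective configuration from the Admin API and compare it against the IdP's client registration. A sketch (assumes the plugin name openid-connect, as in Kong's bundled Enterprise plugin; community forks may call it oidc, and jq must be installed):

```shell
# List every OIDC plugin instance and its config
curl -s http://localhost:8001/plugins \
  | jq '.data[] | select(.name == "openid-connect") | .config'
```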
In Kong Gateway, you cannot have two services bound to the exact same route — a route must be unique in terms of its combination of matching rules (such as paths, hosts, methods, etc.).
🚫 Why You Can’t Have Duplicate Routes
Kong uses the route’s matching criteria to determine which service should handle a request. If two routes have the same matching rules, Kong can’t disambiguate, which leads to a conflict error when trying to create the second route.
For example:
# Service A
curl -i -X POST http://localhost:8001/services/ --data name=service-a --data url=http://api-a
curl -i -X POST http://localhost:8001/routes --data name=route-a --data service.name=service-a --data paths[]=/api
# Service B
curl -i -X POST http://localhost:8001/services/ --data name=service-b --data url=http://api-b
curl -i -X POST http://localhost:8001/routes --data name=route-b --data service.name=service-b --data paths[]=/api
⛔ The second POST /routes will fail with:
HTTP 409 Conflict – duplicate entry
✅ Workarounds / Alternatives
| Goal | Solution |
|---|---|
| Blue/Green or Canary Deployments | Use Kong Plugins (e.g., traffic-split, canary) or upstreams and targets instead of duplicate routes. |
| Different consumers/users hitting different backends | Use request transformers, ACLs, or Kong Enterprise Route-by-header/Route-by-consumer plugins. |
| Same path, different method or host | Differentiate routes by methods[], hosts[], or headers. |
🧪 Example: Two Routes with Same Path, Different Hosts
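For instance, reusing service-a and service-b from the earlier example (the hostnames are hypothetical):

```shell
curl -i -X POST http://localhost:8001/routes \
  --data name=route-a --data service.name=service-a \
  --data paths[]=/api --data hosts[]=a.example.com
curl -i -X POST http://localhost:8001/routes \
  --data name=route-b --data service.name=service-b \
  --data paths[]=/api --data hosts[]=b.example.com
```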
These can coexist because their hosts[] fields are different.
🧠 Summary
| Route Matching Rule | Must Be Unique For |
|---|---|
| paths[] | Same host/methods/headers |
| hosts[] | If combined with same path |
| methods[], headers | Can disambiguate routes with same path |
If you’re trying to achieve load balancing, blue-green deployment, or AB testing between services at the same route — I can help you set that up using upstreams + targets or the traffic-split plugin.
Yes, when using Kong Konnect with an OIDC (OpenID Connect) client, SSL (TLS) is typically required in production environments for both security and functional reasons. Here’s a breakdown of why:
✅ Why SSL/TLS is Required for OIDC with Kong Konnect
1. OIDC Specification Requires HTTPS
The OIDC standard mandates that communication between clients and identity providers (IdPs) must use HTTPS to protect tokens and credentials.
For example, the redirect_uri used in the OIDC flow must be HTTPS, or most IdPs will reject it.
2. Kong Konnect’s OIDC Plugin Requires HTTPS for Secure Redirects
If you’re using the Kong OIDC plugin, especially in Authorization Code Flow, Kong must redirect users to the IdP and then receive the callback securely.
Without SSL, the redirect URI may be considered invalid or insecure by the IdP.
If you’re seeing upstream_status=504 in Kong logs, it means Kong sent the request to the upstream (your backend service) but did not receive a response within the configured timeout, so it returned a 504 Gateway Timeout to the client. A load balancer or firewall between Kong and the upstream that blocks or delays traffic can also cause this.
🔍 What to Check When You See upstream_status=504
✅ 1. Verify Upstream is Healthy
Try to access your upstream from the Kong host directly:
curl -v http://<upstream-host>:<port>/<endpoint>
If this is slow or hangs → your backend is the problem.
If it fails to connect → network or DNS issue.
✅ 2. Increase Kong Timeouts
Timeouts are per-service. You can increase them using the Admin API:
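For example (the service name and the 120-second values are assumptions; Kong's service timeouts are expressed in milliseconds and default to 60000):

```shell
curl -X PATCH http://localhost:8001/services/my-service \
  --data "connect_timeout=120000" \
  --data "write_timeout=120000" \
  --data "read_timeout=120000"
```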