“unable to get local issuer certificate” error: proxy

Yes — if the client is behind a proxy, that can definitely cause the “unable to get local issuer certificate” error.

Here’s why:


1. Intercepting / SSL inspection proxy

  • Some proxies (corporate firewalls, Zscaler, BlueCoat, Palo Alto, etc.) terminate TLS, then re-sign the traffic with their own internal CA.
  • The client sees a cert issued by that proxy CA, not the real site.
  • If the client’s trust store doesn’t include that proxy’s root CA, validation fails → “unable to get local issuer certificate”.

2. Forward proxy (CONNECT tunnel)

  • If the proxy just forwards raw TLS (no inspection), usually no issue.
  • But if the proxy alters headers (Host/SNI) or drops part of the certificate chain, the server’s cert may not validate correctly.

3. Transparent proxy / middlebox

  • Some devices “peek” into TLS but don’t pass along the full chain → clients only see the leaf cert.
  • That also leads to the issuer validation error.

How to confirm if proxy is the problem

Run openssl s_client from behind the proxy and compare to running it directly (bypassing proxy):

openssl s_client -connect example.com:443 -servername example.com -showcerts

If the issuer/subject of the leaf cert changes (e.g. you see “Zscaler Root CA” instead of “Let’s Encrypt”), then the proxy is doing TLS interception.
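That comparison can be scripted. A sketch (the helper name is hypothetical) that lists every issuer in a saved `openssl s_client -showcerts` transcript — capture once direct and once behind the proxy, then compare the two outputs:

```shell
#!/bin/sh
# List the issuer of every certificate in a saved `openssl s_client
# -showcerts` transcript. Chain-summary lines look like:
#    0 s:CN = example.com
#      i:C = US, O = Let's Encrypt, CN = R3
list_issuers() {
  grep -E '^[[:space:]]*i:' "$1" | sed -E 's/^[[:space:]]*i:[[:space:]]*//'
}

# Capture transcripts on both paths, then eyeball the difference:
#   openssl s_client -connect example.com:443 -servername example.com \
#     -showcerts </dev/null > direct.txt     # repeat as proxied.txt behind the proxy
#   list_issuers direct.txt
#   list_issuers proxied.txt   # a "Zscaler Root CA" here means interception
```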


Fixes if proxy is cause

  • Trusted root CA: import the proxy’s root/intermediate CA into the client trust store (system trust, Java cacerts, browser trust store, etc.).
  • Bypass proxy for domains where you need direct trust (if allowed by policy).
  • Update proxy config so it presents the full chain correctly.

👉 Do you want me to show you exactly how to check if your client’s proxy is re-signing the cert (step-by-step with openssl and comparing cert subjects)?

unable to get local issuer certificate

“unable to get local issuer certificate” means the TLS verifier can’t build a valid trust chain from the server’s cert → a trusted root. In plain English: the server didn’t send the right intermediate(s), or it’s using a CA your client doesn’t trust (e.g., a private CA or a Cloudflare Origin Certificate seen directly).

Here’s how to fix it fast in a Cloudflare → F5 → Kong → Upstream stack.


1) Identify which hop is failing

Run these from your laptop (replace host):

# Client → Cloudflare (public hostname)
openssl s_client -showcerts -servername api.example.com -connect api.example.com:443 </dev/null | sed -n '1,/-END CERTIFICATE-/p'

# Client → F5 (bypass Cloudflare: hit the VIP IP)
openssl s_client -showcerts -connect <F5_PUBLIC_IP>:443 </dev/null

# Direct to Kong node (if it terminates TLS)
openssl s_client -showcerts -connect <KONG_NODE_IP>:8443 </dev/null

# From inside Kong to the upstream (what Kong sees)
docker exec -it kong sh -lc 'apk add --no-cache openssl >/dev/null 2>&1 || true; \
  openssl s_client -showcerts -servername <UPSTREAM_HOST> -connect <UPSTREAM_HOST>:443 </dev/null'

Look at:

  • The presented chain (server cert + intermediates)
  • Verify return code: (should be 0 (ok))
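When checking several hops, the verdict line can be grepped out of each saved transcript. A sketch (file names are placeholders for one saved transcript per hop):

```shell
#!/bin/sh
# Succeeds only when a saved s_client transcript ends with a clean verdict.
verify_ok() {
  grep -q 'Verify return code: 0 (ok)' "$1"
}

# Example, assuming one transcript saved per hop:
#   verify_ok cloudflare.txt && echo "edge: chain OK" || echo "edge: chain BROKEN"
#   verify_ok f5.txt         && echo "f5: chain OK"   || echo "f5: chain BROKEN"
```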

2) Common root causes & specific fixes

A) Missing intermediate on your origin (F5/Kong/upstream)

Symptom: OpenSSL shows only the server cert, or depth=0 ok then fails at depth=1 with this error.

Fix:

  • F5: import the intermediate CA that issued your server cert and set it as the Chain in the Client SSL Profile bound to your VIP (Certificate + Key + Chain). Ensure the chain is complete (server → intermediate(s) → root).
  • Kong terminating TLS (TLS Ingress on 8443): configure cert + key using the full chain bundle (server + intermediates). Most proxies require fullchain.pem (not just cert.pem).
  • Upstream service: install a proper cert and serve the intermediates.
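Assembling and sanity-checking the bundle is just concatenation in the right order. A sketch with placeholder file names (`server.crt` is the leaf, `intermediate.crt` the issuing CA’s cert):

```shell
#!/bin/sh
# Build fullchain.pem: leaf first, then intermediate(s). The root is
# optional - clients are expected to have it in their trust store already.
build_fullchain() {
  cat "$1" "$2" > fullchain.pem
}

# A bundle with only one BEGIN CERTIFICATE block is the classic
# "missing intermediate" misconfiguration.
cert_count() {
  grep -c 'BEGIN CERTIFICATE' "$1"
}

# Usage:
#   build_fullchain server.crt intermediate.crt
#   [ "$(cert_count fullchain.pem)" -ge 2 ] || echo "WARNING: no intermediates in bundle"
```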

B) You used a Cloudflare Origin Certificate and connected directly to F5

Cloudflare Origin Certs are only trusted by Cloudflare, not by browsers/curl.
Symptom: Works when proxied through Cloudflare (orange cloud), fails when you hit the F5 IP directly.

Fix options:

  • Keep traffic proxied through Cloudflare (don’t bypass); or
  • Install a publicly trusted cert on F5 (e.g., Let’s Encrypt) for direct access.

C) Kong → Upstream uses an internal/private CA

Symptom: Client to Kong is fine; Kong logs show upstream TLS verify errors, or your route returns 500/502.

Fix (Kong Admin API):

  1. Upload your internal CA:
curl -s -X POST :8001/ca_certificates -F cert=@corp-root-or-intermediate.pem
# -> returns {"id":"<CA_ID>", ...}

  2. Attach it to the Service and enforce verification:
curl -s -X PATCH :8001/services/<service_id> \
  -d tls_verify=true \
  -d tls_verify_depth=2 \
  -d ca_certificates[]='<CA_ID>'

(Temporary only: you can set tls_verify=false to prove that CA trust is the issue—then turn it back on with the proper CA.)

D) Wrong SNI / Hostname mismatch

If the cert is for api.example.com but the TLS handshake SNI is different, validation fails.

  • Ensure F5 sends correct SNI when it re-encrypts to Kong/upstream.
  • In Kong Service for HTTPS upstreams, set sni=<upstream-cert-hostname>.

E) Old client trust store / corporate MITM

  • Older clients may lack newer roots (update trust store).
  • If a corporate proxy is intercepting TLS, clients need that proxy’s root in their trust store.

3) Quick “good config” checklist per hop

Cloudflare → F5

  • SSL mode: Full (strict).
  • (Optional) Authenticated Origin Pull: F5 must trust CF Origin Pull CA and require client cert.
  • F5 presents a cert chain your clients would trust when you test direct (unless you never hit it directly).

F5 → Kong

  • If re-encrypting: Server SSL profile with proper SNI to Kong (if Kong expects it). Otherwise send HTTP on a private VLAN to keep it simple.
  • Preserve headers (X-Forwarded-Proto, X-Forwarded-For).

Kong → Upstream

  • For HTTPS upstreams:
    • protocol=https, port=443, sni=<host>, tls_verify=true
    • Attach internal CA certificate if upstream uses a private CA (see step 2C).

4) Don’t “fix” by disabling verification (except to isolate)

  • curl -k, Node’s NODE_TLS_REJECT_UNAUTHORIZED=0, Java -Dcom.sun.net.ssl.checkRevocation=false—use only for a one-off test. The real fix is serving the right chain or installing the proper CA.

5) One-command validator you can paste in tickets

openssl s_client -connect api.example.com:443 -servername api.example.com -verify_return_error </dev/null

You’ll get a clear reason (missing issuer / self-signed / hostname mismatch).
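The reason comes back as a numeric verify error:num=NN line; a lookup table for the codes you’re most likely to hit in this stack (a sketch — the authoritative list is in man openssl-verify):

```shell
#!/bin/sh
# Map the most common `verify error:num=NN` codes from openssl s_client
# to a plain-English cause (full list: man openssl-verify).
explain_verify() {
  case "$1" in
    2|20)  echo "missing/untrusted issuer: serve the intermediates or install the CA" ;;
    21)    echo "cannot verify leaf: server sent no usable chain" ;;
    18|19) echo "self-signed cert (or private CA) not in the trust store" ;;
    10)    echo "certificate has expired" ;;
    62)    echo "hostname mismatch: check SNI and the cert's SANs" ;;
    *)     echo "code $1: see man openssl-verify" ;;
  esac
}
```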


If you tell me which endpoint shows the error (Cloudflare URL, direct F5 VIP, Kong 8443, or Kong→Upstream) and paste the first ~40 lines of the openssl s_client output, I’ll point to the exact certificate/chain file you need to fix.

KONG

Got it — here’s a clean, practical way to wire Client → Cloudflare → F5 → Kong → Upstream so your normal Kong flow
Client → [Kong Route matched] → [Service invoked] → Upstream API works end-to-end.


0) Topology (recommended)

Client ─TLS─> Cloudflare ─TLS─> F5 VIP ─HTTP─> Kong (8000) ─(HTTP/HTTPS)─> Upstream

  • Cloudflare terminates client TLS + WAF/bot/DDOS.
  • F5 is your “origin” for Cloudflare, then load balances across Kong nodes.
  • Kong routes/plugins/auth, then proxies to your upstream.

You can also do F5→Kong over HTTPS (8443). Start with HTTP (simpler), add re-encryption later.


1) Cloudflare (edge) setup

  1. DNS: api.example.com → orange-cloud (proxied) to your F5 public IP.
  2. SSL/TLS: set mode Full (strict).
  3. Authenticated Origin Pull (AOP):
    • Enable it in Cloudflare.
    • On F5, require the Cloudflare client cert (see F5 step) so only CF can hit your VIP.
  4. Origin server certificate (on F5): either
    • a normal public cert (LetsEncrypt, etc.), or
    • a Cloudflare Origin Certificate (valid only to CF; fine if you never bypass CF).
  5. API cache rules: bypass cache for /api/*, enable WebSockets if you use them.

2) F5 LTM (VIP to Kong) essentials

Virtual server (HTTPS on 443) → pool (Kong nodes on 8000)

  • Client SSL profile: present your cert (public or CF Origin Cert).
  • (Optional but recommended) verify Cloudflare client cert for AOP:
    import Cloudflare Origin Pull CA on F5 and set it as Trusted CA; require client cert.
  • HTTP profile: enable; Insert X-Forwarded-For; preserve headers.
  • OneConnect: enable for keep-alives to Kong.
  • Pool members: all Kong nodes, port 8000 (or 8443 if you re-encrypt).
  • Health monitor: HTTP GET to Kong status (see Kong step).

Header hygiene (iRule – optional, if you want to be explicit):

when HTTP_REQUEST {
  # Preserve real client IP from Cloudflare into X-Forwarded-For
  if { [HTTP::header exists "CF-Connecting-IP"] } {
    set cfip [HTTP::header value "CF-Connecting-IP"]
    if { [HTTP::header exists "X-Forwarded-For"] } {
      HTTP::header replace "X-Forwarded-For" "[HTTP::header value X-Forwarded-For], $cfip"
    } else {
      HTTP::header insert "X-Forwarded-For" $cfip
    }
  }
  HTTP::header replace "X-Forwarded-Proto" "https"
}

(Or just enable “X-Forwarded-For: append” in the HTTP profile and set XFP via policy.)


3) Kong Gateway settings (behind F5)

Environment (docker-compose/env vars):

KONG_PROXY_LISTEN=0.0.0.0:8000
# Trust only your F5 addresses/CIDRs (do NOT trust 0.0.0.0/0)
KONG_TRUSTED_IPS=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,<F5_VIP_or_SNATs>
KONG_REAL_IP_HEADER=X-Forwarded-For
KONG_REAL_IP_RECURSIVE=on
# helpful during testing
KONG_HEADERS=latency_tokens
# health endpoint for F5 monitor (internal only)
KONG_STATUS_LISTEN=0.0.0.0:8100

F5 health monitor target (HTTP):

  • URL: http://<kong-node>:8100/status
  • Expect: 200 (Kong’s status port; safe to expose internally)

Define Service/Route (example):

# Service to your upstream (HTTP)
curl -sX POST :8001/services \
  -d name=orders-svc \
  -d host=tomcat-app \
  -d port=8080 \
  -d protocol=http

# Route that matches your API path/host
curl -sX POST :8001/routes \
  -d service.name=orders-svc \
  -d hosts[]=api.example.com \
  -d paths[]=/v1

(If upstream is HTTPS, set protocol=https, port=443, and sni=upstream.host.)


4) Putting it together (request path)

  1. Client → https://api.example.com/v1/...
  2. Cloudflare terminates TLS, adds CF-Connecting-IP, forwards to F5.
  3. F5 validates CF client cert (AOP), appends XFF, sets X-Forwarded-Proto:https, LB to a Kong node.
  4. Kong trusts F5 IP, extracts the real client IP from XFF, matches Route (Host=api.example.com, path=/v1), invokes Service, proxies to Upstream.
  5. Response flows back; Kong adds latency headers (if enabled); F5 returns to CF; CF returns to client.

5) Testing (in layers)

  • Direct to Kong (bypass CF/F5) on the private network: curl -i -H 'Host: api.example.com' http://<kong-node>:8000/v1/ping -H 'Kong-Debug: 1'
  • Through F5 VIP (simulate Cloudflare off): curl -i -H 'Host: api.example.com' https://<f5-public-ip>/v1/ping --resolve api.example.com:<f5-public-ip>:443
  • Through Cloudflare (real path): curl -i https://api.example.com/v1/ping -H 'Kong-Debug: 1'

Check headers:

  • X-Kong-Upstream-Latency (upstream time)
  • X-Kong-Proxy-Latency (Kong internal)
  • Logs in Kong show client_ip = real client IP (not F5/CF) if trusted IPs are correct.
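Those headers are easy to pull out of a saved header dump (e.g. `curl -sD headers.txt -o /dev/null …`). A sketch for scripting the check:

```shell
#!/bin/sh
# Show all Kong latency headers from a saved response-header dump.
kong_latencies() {
  grep -i '^x-kong-.*-latency' "$1"
}

# Extract one header's numeric value (strips the CR from HTTP line endings).
header_ms() {
  grep -i "^$2:" "$1" | tr -d '\r' | awk -F': *' '{print $2}'
}

# Example: flag requests where gateway overhead exceeds upstream time
#   [ "$(header_ms headers.txt X-Kong-Proxy-Latency)" -gt \
#     "$(header_ms headers.txt X-Kong-Upstream-Latency)" ] \
#     && echo "Kong overhead dominates: check plugins"
```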

6) Observability (recommended)

  • Kong plugins:
    • prometheus (metrics), correlation-id (X-Request-ID), http-log or file-log → ELK.
  • F5: request logging profile to ELK; LTM stats.
  • Cloudflare: Security events/Edge logs (if plan allows).

7) Common gotchas (and quick fixes)

  • Client IP shows F5/Cloudflare → set KONG_TRUSTED_IPS, REAL_IP_*; ensure F5 appends XFF and passes CF-Connecting-IP.
  • OIDC header lost → make sure F5 policy does not strip Authorization.
  • Wrong redirects / mixed content → ensure X-Forwarded-Proto: https reaches Kong & upstream.
  • “Server busy” / spikes → raise F5 OneConnect pool, ensure Kong backlog/somaxconn, keepalive to upstream.
  • Health checks hitting Admin API → use KONG_STATUS_LISTEN (8100) instead, not port 8001 (Admin).

If you want, I can:

  • generate a Kong docker-compose block with those envs,
  • give you an F5 monitor config snippet,
  • and add a sample OIDC plugin + rate-limit plugin to the route for a production-like baseline.

Cloudflare → F5 → Kong API stack

Here’s the clean mental model for a Cloudflare → F5 → Kong API stack, plus the key headers/TLS choices and the minimum config knobs so it “just works.”

1) The traffic flow (happy path)

Client ─(HTTPS/HTTP2/3)→ Cloudflare ─(TLS to origin)→ F5 VIP ─(HTTP/HTTPS)→ Kong cluster ─→ Upstream services

  1. Cloudflare terminates the client TLS (always, if the orange-cloud proxy is on).
    • Applies WAF, DDoS, bot rules, rate limits, geo rules, etc.
    • Forwards to your F5 VIP as the “origin”.
  2. F5 LTM receives Cloudflare’s request.
    • Usually terminates TLS again (re-encrypt to Kong or send plain HTTP on the inside).
    • Load-balances across Kong nodes (pool members in Zone A/B).
  3. Kong Gateway routes by Host/path to your backend (service), runs plugins (OIDC, rate-limit, etc.), and proxies to the upstream.

2) TLS choices (pick one per hop)

Cloudflare → F5 (origin TLS):

  • Set Cloudflare SSL mode to Full (strict).
  • Enable Authenticated Origin Pull so only Cloudflare can hit F5.
  • On F5, trust Cloudflare’s Origin Pull CA and require client cert.

F5 → Kong:

  • Simple: terminate on F5 and send HTTP to Kong on the private VLAN.
  • End-to-end TLS: client-SSL on F5, server-SSL from F5 to Kong (re-encrypt), SNI kong.internal (or node name).

Kong → Upstream:

  • Match your upstream: protocol=http|https, SNI if TLS, optionally mTLS to sensitive services.

3) Real client IP (do this or logs/limits will be wrong)

Cloudflare sets:

  • CF-Connecting-IP (client IP)
  • X-Forwarded-For (appends client IP)
  • X-Forwarded-Proto: https

F5 should preserve (not overwrite) X-Forwarded-For and pass X-Forwarded-Proto.

Kong must trust the proxy chain so it can compute the real client IP:

  • Set (env or kong.conf):
    • KONG_TRUSTED_IPS=<F5 private CIDRs or F5 VIPs> (don’t trust 0.0.0.0/0)
    • KONG_REAL_IP_HEADER=X-Forwarded-For
    • KONG_REAL_IP_RECURSIVE=on
  • Then client_ip in logs, rate-limit/correlation will be the actual user IP from Cloudflare.

If you prefer using Cloudflare’s header explicitly, you can have F5 copy CF-Connecting-IP into the leftmost position of X-Forwarded-For.
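The leftmost-untrusted logic is worth internalizing: with real_ip_recursive on, Kong walks X-Forwarded-For from the right, skips every trusted proxy hop, and takes the first address it doesn’t trust. A simplified model of that walk (exact-IP matching instead of the CIDR matching Kong actually does):

```shell
#!/bin/sh
# Simplified model of real_ip_recursive: walk XFF right-to-left, skip
# trusted proxies, return the first untrusted address.
real_client_ip() {
  xff=$1; shift
  trusted="$*"
  # split the XFF value on commas (word splitting is intentional here)
  set -- $(printf '%s' "$xff" | tr ',' ' ')
  i=$#
  while [ "$i" -ge 1 ]; do
    eval "ip=\${$i}"
    case " $trusted " in
      *" $ip "*) ;;                   # trusted hop: keep walking left
      *) printf '%s\n' "$ip"; return ;;
    esac
    i=$((i-1))
  done
  printf '%s\n' "$ip"                 # every hop trusted: leftmost wins
}

# real_client_ip "203.0.113.7, 104.16.0.1, 10.0.0.5" 10.0.0.5 104.16.0.1
# -> 203.0.113.7   (the Cloudflare and F5 hops are skipped)
```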

4) Load balancing & health checks (avoid double “mystery” failover)

  • Cloudflare (optional LB): usually point it at a single F5 VIP per region and let F5 do node health.
  • F5 → Kong nodes: HTTP health monitor (e.g., GET /status/health on each Kong).
  • Kong → upstreams: use Kong Upstreams/Targets with active + passive health checks to eject bad app pods.

Pick one layer to be the source of truth per hop (Cloudflare LB or F5, Kong or upstream LB) to avoid contradictory decisions.

5) Protocols & connections

  • HTTP versions: client can be HTTP/2 or HTTP/3 to Cloudflare. Cloudflare→F5 is HTTP/1.1 or HTTP/2 (CF may downgrade). F5→Kong is typically HTTP/1.1.
  • Keep-alive: enable OneConnect on F5 and keep-alive to Kong to avoid connection churn.
  • WebSockets/gRPC: supported end-to-end; ensure Upgrade/HTTP2 is enabled through F5 and Kong Routes/Services.

6) Minimal config snippets

F5 (HTTP profile / header handling):

  • Enable “Insert X-Forwarded-For” (or an iRule to append not overwrite).
  • Preserve X-Forwarded-Proto = https.
  • If using Authenticated Origin Pull: client-SSL requires CF client cert; trust CF Origin CA.

Kong (env):

KONG_HEADERS=latency_tokens
KONG_TRUSTED_IPS=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,<your F5 VIPs/CIDRs>
KONG_REAL_IP_HEADER=X-Forwarded-For
KONG_REAL_IP_RECURSIVE=on

Kong Service (HTTPS upstream with SNI):

protocol: https
host: api.internal.example
port: 443
tls_verify: true
sni: api.internal.example

Kong rate limiting behind proxies: use consumer or ip policy; with trusted IPs set, ip uses the real client.

7) Observability (what to turn on)

  • Cloudflare: (if plan allows) Edge logs / Security events (attack/waf/bot).
  • F5: LTM logs + request logging profiles; export to ELK.
  • Kong: enable Prometheus plugin, correlation-id, and http-log/file-log to ELK.
  • Make sure your ELK/Loki sees client_ip, X-Request-ID, service, route, upstream_status, latencies.{kong,proxy,request}.

8) Common pitfalls & quick fixes

  • Client IP shows F5/Cloudflare: set KONG_TRUSTED_IPS and KONG_REAL_IP_* as above; ensure F5 appends—not overwrites—X-Forwarded-For.
  • OIDC 401s behind CF/F5: your LB stripped Authorization. Validate with an echo upstream; ensure F5 leaves Authorization intact.
  • Mixed-content / wrong redirects: preserve X-Forwarded-Proto: https through to Kong and upstream so apps build HTTPS URLs.
  • Header too large (Set-Cookie): raise Kong Nginx proxy buffers if you see upstream sent too big header.
  • Double LB flaps: don’t health-check the same hop in both Cloudflare and F5 with different intervals/paths.

9) Example end-to-end (typical, secure)

  1. Cloudflare Full (strict) + Authenticated Origin Pull → F5 public VIP api.example.com:443.
  2. F5 terminates TLS, adds XFF if missing, passes X-Forwarded-Proto: https, LB to kongA:8000, kongB:8000.
  3. Kong trusts F5 IPs, pulls real client IP from XFF, runs OIDC/rate-limit, sends to upstream (HTTPS with SNI).
  4. Logs/metrics from all 3 land in ELK/Grafana.

If you want, I can drop a ready-to-run Kong docker-compose with those envs set, plus example F5 monitor path and a Kong Service/Route pointing to your php/tomcat containers.

ClientMTLSAuth status absent

It means no client certificate was presented/validated on the TLS handshake—i.e., client mTLS auth did not happen for that request.

Think of it like Nginx’s $ssl_client_verify = NONE: the server (Cloudflare/F5/Kong) didn’t see a client cert, so mTLS status is absent.

Why it happens

  • Client didn’t send a cert (missing --cert/--key in curl, or browser/keystore not configured).
  • mTLS not required on that hop (e.g., F5/Cloudflare set to “request/ignore” instead of “require”; Kong’s mtls-auth plugin not enabled).
  • TLS terminates before Kong (Cloudflare/F5 terminates TLS; the origin can’t see the client cert unless that proxy enforces mTLS itself).
  • Wrong host/port/SNI that doesn’t have the mTLS policy.

Quick checks

# PEM pair
curl -vk https://api.example.com/path \
  --cert client.crt --key client.key

# or PKCS#12
curl -vk https://api.example.com/path \
  --cert-type P12 --cert client.p12:password

Look for SSL client certificate lines in the verbose output.

  • On Cloudflare/F5:
    • Require client certs on the relevant hostname/path.
    • Ensure the CA used to validate client certs is uploaded/selected.
    • If using Cloudflare, remember: the client’s cert terminates at Cloudflare; the origin (Kong) won’t receive it. Enforce mTLS at Cloudflare edge and use origin controls (e.g., Authenticated Origin Pulls) separately.
  • On Kong:
    • If using Enterprise mtls-auth plugin, enable it on the route/service and upload your CA certificate to /ca-certificates; reference it in the plugin.
    • If relying on Nginx-level mTLS, ensure the proxy listener is configured with ssl_client_certificate and ssl_verify_client on; (or the equivalent in your deployment).

Interpreting statuses

  • absent = no cert presented.
  • failed/invalid = cert presented but couldn’t be validated (untrusted/expired/CN mismatch).
  • success/verified = cert was validated.
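To flip the status from absent to verified in a test, you need an actual client cert; minting a throwaway self-signed pair is enough to prove the plumbing. A sketch (a real setup needs a cert issued by the CA the server’s mTLS policy trusts):

```shell
#!/bin/sh
# Mint a throwaway client key + self-signed cert (1-day validity).
# NOTE: a server enforcing mTLS will only accept it if this cert (or its
# issuing CA) is in the server's client-CA list - plumbing tests only.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=test-client" -keyout client.key -out client.crt 2>/dev/null

# Present it and watch the verbose output for the client-certificate exchange:
#   curl -v https://api.example.com/path --cert client.crt --key client.key
```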

If you share how traffic flows (Client → Cloudflare/F5 → Kong) and where you intend to enforce mTLS, I’ll give you the exact config and a working curl for that hop.

kong 500

Good question 👍 — let’s break down upstream_status=500 in Kong.


🔹 What it means

  • 500 = Internal Server Error returned by your upstream service, not Kong.
  • Kong successfully forwarded the request to the backend, got a real HTTP response, and passed that status back.
  • So unlike 502/503/504 (which usually mean Kong couldn’t reach the upstream), a 500 means Kong did reach it, but the app/server itself failed internally.

🔹 Common causes

  • Unhandled exception in the app (backend crashed while processing request).
  • Bad input from client → app logic throws error instead of handling gracefully.
  • Dependency failure in upstream (e.g., DB connection error, null pointer, etc.).
  • Misrouted traffic (Kong points to wrong host/port → some service returns 500).
  • Authentication mismatch (upstream expected something different, failed internally).

🔹 How to verify it’s the upstream

  1. Check Kong’s access log: status=500 upstream_status=500 → both match, so the error truly comes from the upstream; status=401 upstream_status=200 → Kong overrode the upstream result (plugin decision).
  2. If you’re unsure, temporarily point the route directly to a test echo server (httpbin or mockbin) and re-run → if 500 disappears, your real upstream is the culprit.

🔹 Troubleshooting steps

  • Direct test upstream (bypass Kong): curl -i http://<upstream_host>:<port>/<path>. If you still get 500 → it’s an upstream bug.
  • Enable upstream error logging in Kong (in debug mode or use a logging plugin → Splunk/ELK).
  • Check app logs on the backend service for stack traces.
  • If you see status=502 upstream_status=- → that’s different (connection failed, no response).
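The decision rule above can be captured in a tiny classifier over the status / upstream_status pair as they appear in Kong’s access log (a sketch):

```shell
#!/bin/sh
# Classify who produced the response: the upstream, Kong itself, or a
# plugin that overrode the upstream's status.
classify_kong_status() {
  status=$1; up=$2
  if [ "$up" = "-" ] || [ -z "$up" ]; then
    echo "kong"        # no upstream response: plugin denial or connect failure
  elif [ "$status" = "$up" ]; then
    echo "upstream"    # Kong proxied the upstream's own status untouched
  else
    echo "overridden"  # a plugin rewrote the status (e.g. 401 over a 200)
  fi
}

# classify_kong_status 500 500  -> upstream
# classify_kong_status 502 -    -> kong
# classify_kong_status 401 200  -> overridden
```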

✅ So, in short:
upstream_status=500 means your app/service returned a 500 → Kong didn’t block it, just proxied it.


Do you want me to also list the difference between 500 vs 502 vs 503 vs 504 in Kong upstream_status, so you can quickly tell where the problem is (Kong vs upstream)?

Kong latency 2

Short answer:
In Kong logs, proxy latency is the time spent waiting on your upstream service (the API/backend) — i.e., how long it took the upstream to respond to Kong.

Here’s the breakdown of the three latency fields you’ll see in Kong logs:

  • latencies.proxy → Upstream latency (a.k.a. “proxy latency”): time from when Kong sends the request to the upstream until it starts getting the response.
  • latencies.kong → Kong internal time: routing + plugin execution + overhead inside Kong.
  • latencies.request → Total request time as seen by the client.

Quick mental model:

Client ──> [ Kong (latencies.kong) ] ──> Upstream API (latencies.proxy) ──> [ Kong ] ──> Client
                           \________________ latencies.request ________________/

A common point of confusion: in response headers,

  • X-Kong-Upstream-Latency → latencies.proxy (upstream time)
  • X-Kong-Proxy-Latency → latencies.kong (Kong time)

So, if you see high proxy latency, the slowness is almost always in your backend (or the network to it), not Kong itself. Focus on the upstream’s performance (DB calls, external services), network/DNS, and connection reuse; use Kong’s service/route timeouts (connect_timeout, read_timeout, write_timeout) to guard against outliers.

kong latency

Short answer: there isn’t a single built-in “average response time” value for Kong. You measure it from Kong’s latency telemetry and then compute the average. Kong exposes three latencies:

  • Request latency (a.k.a. total): time from first byte in to last byte out.
  • Upstream latency: time the upstream took to start responding.
  • Kong latency: time spent inside Kong (routing + plugins). (Kong Docs)

Below are the quickest ways to get the average.


Option 1 — Prometheus plugin (fastest for averages/percentiles)

Enable the Prometheus plugin with config.latency_metrics=true. It exposes histograms:

  • kong_request_latency_ms_* (total)
  • kong_upstream_latency_ms_*
  • kong_kong_latency_ms_* (Kong Docs)

PromQL examples (last 5 minutes):

# Average total response time
sum(rate(kong_request_latency_ms_sum[5m]))
/
sum(rate(kong_request_latency_ms_count[5m]))

# Average per service
sum by (service) (rate(kong_request_latency_ms_sum[5m]))
/
sum by (service) (rate(kong_request_latency_ms_count[5m]))

# p95 latency (total)
histogram_quantile(
  0.95,
  sum by (le) (rate(kong_request_latency_ms_bucket[5m]))
)

Swap request for upstream or kong in the metric names to break the average down by where time is spent. (Kong Docs)


Option 2 — ELK/Kibana (since you’re using ELK)

Use a log plugin (e.g., File Log or HTTP Log) and ship the JSON to Elasticsearch. Each log line contains:

"latencies": {
  "request": <total_ms>,
  "proxy":   <upstream_ms>,
  "kong":    <kong_ms>
}

In Kibana → Discover/Lens, set Average over latencies.request (or latencies.proxy / latencies.kong) to see the mean response time. (Kong Docs)
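If you just have the raw JSON log lines on disk (before they reach Elasticsearch), a quick mean can be computed locally. A sketch using awk’s crude pattern extraction (assumes compact one-object-per-line JSON; jq is cleaner if installed):

```shell
#!/bin/sh
# Mean of latencies.request over a file of Kong JSON log lines.
# Crude: pattern-matches the number inside the "latencies" object
# rather than parsing JSON properly.
avg_request_latency() {
  awk 'match($0, /"latencies":[^}]*"request":[0-9]+/) {
         s = substr($0, RSTART, RLENGTH)
         sub(/.*"request":/, "", s)
         sum += s; n++
       }
       END { if (n) printf "%.1f\n", sum / n }' "$1"
}

# Usage: avg_request_latency kong-http-log.jsonl
```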


Option 3 — Quick curl check (headers)

Kong can add latency headers:

  • X-Kong-Proxy-Latency (≈ Kong time)
  • X-Kong-Upstream-Latency (≈ upstream time)
  • X-Kong-Total-Latency (total; with advanced tokens)

Enable with KONG_HEADERS=latency_tokens (or advanced_latency_tokens) and inspect with curl -i. (Kong Docs)


What’s a “good” number?

It’s workload-specific. Treat Kong latency as your gateway overhead (aim for low single-digit ms) and focus tuning on upstream latency (usually the dominant share). Use p95/p99 alongside the average to catch tail latency. (Kong Docs)

If you want, I can drop a Grafana panel (PromQL prewired) and a Kibana Lens example targeting latencies.request so you can paste them into your stack.

error kong


Got it—here’s a fast, practical playbook for “invalid bearer token” in a Kong + Ping Identity setup (with F5 in front).

Quick triage (60‑second checks)

  1. Header format
    • Must be exactly: Authorization: Bearer <access_token> (case‑sensitive “Bearer”, single space, no quotes).
  2. Right token type
    • Use the Access Token, not the ID token.
  3. Header delivery
    • Ensure F5/LB isn’t stripping Authorization. Hit Kong directly to compare.
# Direct to Kong (bypass F5 if you can)
curl -si https://kong.example.com/api \
  -H "Authorization: Bearer $AT"

Validate the JWT (if your AT is JWT)

1) Is it a JWT?

A JWT has two dots: xxxxx.yyyyy.zzzzz. If not, it’s likely opaque → skip to “Opaque tokens”.

2) Decode & inspect claims

# header
cut -d. -f1 <<<"$AT" | base64 -d 2>/dev/null | jq
# payload
cut -d. -f2 <<<"$AT" | base64 -d 2>/dev/null | jq

Check:

  • iss equals your Ping issuer in Kong config
  • aud includes your API audience expected by Kong
  • exp is in the future; nbf is not in the future (watch clock skew)
  • scope / roles include what your route requires
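One gotcha with the decode above: JWT segments are base64url (alphabet -_, padding stripped), so plain base64 -d often fails or prints nothing. A padding-aware helper (sketch):

```shell
#!/bin/sh
# Decode a JWT's payload: translate base64url to base64, re-pad, decode.
jwt_claims() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | base64 -d
}

# Usage: jwt_claims "$AT" | jq .   # then eyeball iss/aud/exp/scope
```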

3) Signature / key material

From the JWT header (kid, alg) confirm Kong can fetch the matching JWKS:

  • Kong must reach Ping’s /.well-known/openid-configurationjwks_uri
  • No corporate proxy/DNS blocking
  • alg matches what the IdP issues (e.g., RS256)
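Reaching the discovery document and the jwks_uri it points at is a two-step curl; here’s a sketch for extracting the URI without jq (the Ping hostname in the usage comment is a placeholder, and the check should run from the Kong host to test its egress, not your laptop’s):

```shell
#!/bin/sh
# Pull jwks_uri out of a saved OIDC discovery document.
jwks_uri() {
  sed -n 's/.*"jwks_uri"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$1"
}

# Usage:
#   curl -s https://ping.example.com/as/.well-known/openid-configuration > disc.json
#   curl -sf "$(jwks_uri disc.json)" >/dev/null && echo "JWKS reachable"
```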

Validate with Kong

Turn on debug temporarily and reproduce once:

KONG_LOG_LEVEL=debug kong restart
tail -f /usr/local/kong/logs/error.log | egrep -i 'oidc|jwt|introspect|jwks|bearer|signature|aud|iss'

Typical log clues → fixes:

  • signature verification failed → wrong JWKS, stale kid, or mismatched alg.
  • token expiredexp passed; consider small clock_skew_leeway.
  • audience not allowed / aud mismatch → adjust audience in IdP or audience_claim/allowed_audiences in plugin.
  • issuer not allowed → fix issuer (exact string match).
  • could not get jwks → networking/proxy or TLS trust to Ping.

Common F5/LB gotchas

  • Missing Authorization header forwarding. On F5, ensure policies/iRules preserve it.
  • Rewriting to lower/upper case header names is fine; removing isn’t.

Opaque tokens (non‑JWT access tokens)

If the access token looks random (no dots), enable introspection:

  • Kong openid-connect plugin must have introspection_endpoint, client_id, client_secret.
  • Kong needs network access to Ping’s introspection endpoint.
  • Expect log lines: introspection active / token active:false if invalid or revoked.

Kong plugin configuration — key items to review

# For openid-connect plugin (per service/route)
config:
  issuer: "https://ping.example.com/as"
  discovery: "https://ping.example.com/as/.well-known/openid-configuration"
  # If using opaque tokens:
  introspection_endpoint: "https://ping.example.com/as/introspect.oauth2"
  client_id: "<kong-oidc-client>"
  client_secret: "<secret>"
  # Validation:
  verify_signature: true
  verify_claims: true
  allowed_audiences:
    - "api://my-api"
  scopes_required:
    - "api.read"
  # Optional:
  clock_skew_leeway: 60
  bearer_only: true   # if you don’t want redirects (API-only)

Upstream expectations (post‑auth)

Decide what the upstream needs:

  • Forward original token: config.upstream_access_token_header = "Authorization" (default)
  • Or forward derived identity headers (customize via config.upstream_headers_claims).
    Ensure your upstream checks the same audience/scope semantics (or trusts Kong’s verdict and uses headers only).

Quick curl matrix

# Good: Access token with proper prefix
curl -H "Authorization: Bearer $AT" https://api.example.com/orders

# Bad: ID token by mistake
curl -H "Authorization: Bearer $IDT" https://api.example.com/orders   # expect 401

# Test over F5 vs direct Kong
curl -si -H "Authorization: Bearer $AT" https://edge.example.com/orders
curl -si -H "Authorization: Bearer $AT" https://kong.example.com/orders

Frequent root causes & fixes

  • Typo / missing “Bearer ” → fix header format.
  • Using ID token → request AT (auth code or client credentials).
  • Expired token → renew; verify system clocks (NTP) on F5, Kong, and upstream.
  • Issuer/audience mismatch → align Ping client + API resources with Kong allowed_audiences and issuer.
  • JWKS fetch failure → open egress, fix proxy/TLS trust, or pin jwks_uri.
  • Opaque token with no introspection configured → enable introspection_endpoint + client creds.
  • F5 stripped header → preserve Authorization.
  • Scope missing → include required scopes in token request or relax scopes_required (if appropriate).

If you paste a redacted sample token header/payload (no signature needed) and your current openid-connect plugin snippet, I’ll pinpoint exactly which check is failing and provide the minimal config change or Ping policy tweak.

Kong – no suitable authorization credentials were provided

The error “no suitable authorization credentials were provided” usually means Kong (or an upstream service) expected authentication information, but it either:

  • Didn’t receive it at all
  • Received it in the wrong format
  • Received it in the wrong location (header vs. query parameter)
  • Couldn’t match it to any configured credentials

Here’s how to troubleshoot it step-by-step:


1. Identify which plugin or service is throwing the error

  • Kong Plugin Layer – Often caused by key-auth, jwt, or openid-connect plugin when no valid credential is passed.
  • Upstream Service Layer – Could be from the API/backend if Kong passed the request without credentials.
  • Load Balancer / F5 – Sometimes strips or alters headers, so credentials never reach Kong.

2. Check the authentication type you’re using

  • API Key (key-auth plugin)
    • Expected: apikey as a query parameter (?apikey=<key>) or as a header (apikey: <key>).
    • Verify: curl -H "apikey: YOUR_API_KEY" https://your-kong-endpoint/your-api
  • JWT Plugin
    • Expected: Authorization: Bearer <jwt_token> header.
    • Token must be valid and match a configured consumer.
  • OpenID Connect (openid-connect plugin)
    • Expected: Valid ID/Access token, usually Authorization: Bearer <access_token>.
    • Token must be signed by a trusted issuer in plugin config.

3. Verify headers are not stripped

If you have F5 / Nginx / Apache in front of Kong:

  • Check that Authorization headers are forwarded.
  • On F5, make sure no policy or iRule removes the Authorization header (e.g., via HTTP::header remove).
  • In Nginx, ensure: proxy_set_header Authorization $http_authorization;

4. Check Kong logs for details

Run Kong in debug mode temporarily:

KONG_LOG_LEVEL=debug kong restart
tail -f /usr/local/kong/logs/error.log

Look for:

  • plugin=... lines showing missing credentials
  • no credentials found for given consumer/issuer

5. Confirm consumer mapping

For jwt or oidc:

  • The public key / JWKS URL must be correct.
  • The iss (issuer) in token must match plugin config.
  • The consumer must be linked to the credential.

💡 Common causes I’ve seen in your past scenarios with F5 + Kong + Ping Identity:

  • F5 stripping Authorization header before reaching Kong.
  • Wrong token type (ID token instead of Access token) in OIDC flow.
  • Consumer not mapped in Kong for the provided JWT/OIDC token.
  • client_id and client_secret missing in a token request.

If you want, I can draw you a flow diagram for Kong + Ping Identity showing exactly where the credential might be getting lost in your setup with F5 in the middle. That would make pinpointing the issue much faster.