cert error

That error means the client sees a cert chain that includes a CA it doesn’t trust—most commonly a corporate proxy’s self-signed root (TLS inspection) or a misconfigured server chain. Here’s the fastest way to fix it given your AWS client + proxy setup.

0) Quick proof (run behind the proxy)

# Shows who signed what the client actually sees
openssl s_client -proxy PROXY_HOST:PROXY_PORT \
  -connect YOUR_HOST:443 -servername YOUR_HOST -showcerts -verify_return_error </dev/null \
| openssl x509 -noout -subject -issuer -enddate

  • If the issuer is your company/Zscaler/BlueCoat/etc → it’s proxy interception.
  • If the issuer is a normal public CA but the chain still fails → your server/edge isn’t sending the proper intermediates.
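The same check can be done from Java, which matters because it’s the JVM’s truststore that ultimately decides. A minimal sketch (the hostname and the vendor-keyword heuristic are placeholder assumptions; note it connects directly rather than through a CONNECT tunnel, and if the handshake itself throws the familiar PKIX error, that by itself confirms the JVM truststore is missing the issuer):

```java
import java.security.cert.X509Certificate;
import java.util.Locale;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class ChainCheck {

    // Heuristic from the list above: issuers naming a known interception
    // vendor usually mean the proxy is re-signing traffic.
    static boolean looksLikeInterception(String issuer) {
        String s = issuer.toLowerCase(Locale.ROOT);
        return s.contains("zscaler") || s.contains("bluecoat")
                || s.contains("blue coat") || s.contains("corp");
    }

    // Prints subject/issuer of every cert the JVM actually receives.
    static void printChain(String host) throws Exception {
        SSLSocketFactory f = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket s = (SSLSocket) f.createSocket(host, 443)) {
            s.startHandshake();
            for (var c : s.getSession().getPeerCertificates()) {
                X509Certificate x = (X509Certificate) c;
                System.out.println("subject=" + x.getSubjectX500Principal());
                System.out.println("issuer =" + x.getIssuerX500Principal());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        if (args.length > 0) printChain(args[0]); // e.g. java ChainCheck api.example.com
    }
}
```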

how to integrate token in java call

Gotcha—here are quick, copy-paste ways to add an OIDC/JWT Bearer token to Java HTTP calls. Pick the stack you use.


Java 11+ HttpClient

var client = java.net.http.HttpClient.newHttpClient();
String token = "eyJ..."; // put your access token here

var req = java.net.http.HttpRequest.newBuilder()
    .uri(java.net.URI.create("https://api.example.com/resource"))
    .header("Authorization", "Bearer " + token)
    .header("Content-Type", "application/json")
    .POST(java.net.http.HttpRequest.BodyPublishers.ofString("{\"ping\":\"pong\"}"))
    .build();

var res = client.send(req, java.net.http.HttpResponse.BodyHandlers.ofString());
System.out.println(res.statusCode() + " " + res.body());


OkHttp (popular + simple)

OkHttpClient client = new OkHttpClient.Builder()
    .addInterceptor(chain -> {
        String token = "eyJ..."; // inject per request or from a provider
        Request req = chain.request().newBuilder()
            .addHeader("Authorization", "Bearer " + token)
            .build();
        return chain.proceed(req);
    })
    .build();

Request request = new Request.Builder()
    .url("https://api.example.com/resource")
    .post(RequestBody.create("{\"ping\":\"pong\"}", MediaType.get("application/json")))
    .build();

try (Response resp = client.newCall(request).execute()) {
    System.out.println(resp.code());
}


Apache HttpClient 5

var httpClient = HttpClients.custom()
    .addRequestInterceptorFirst((HttpRequest request, EntityDetails entity, HttpContext ctx) -> {
        request.addHeader("Authorization", "Bearer " + "eyJ...");
    })
    .build();

var post = new HttpPost("https://api.example.com/resource");
post.setHeader(HttpHeaders.CONTENT_TYPE, "application/json");
post.setEntity(new StringEntity("{\"ping\":\"pong\"}", StandardCharsets.UTF_8));

try (var resp = httpClient.execute(post)) {
    System.out.println(resp.getCode());
}


Spring (WebClient) — preferred in Spring Boot

@Bean
WebClient webClient() {
  return WebClient.builder()
      .filter((req, next) -> {
        String token = "eyJ..."; // inject from a bean that caches/refreshes
        ClientRequest r = ClientRequest.from(req)
            .header(HttpHeaders.AUTHORIZATION, "Bearer " + token).build();
        return next.exchange(r);
      })
      .build();
}

// use it
webClient().post().uri("https://api.example.com/resource")
  .contentType(MediaType.APPLICATION_JSON)
  .bodyValue(Map.of("ping","pong"))
  .retrieve().toEntity(String.class).block();


Spring (RestTemplate)

RestTemplate rt = new RestTemplate();
rt.getInterceptors().add((req, body, ex) -> {
  req.getHeaders().setBearerAuth("eyJ...");
  return ex.execute(req, body);
});
ResponseEntity<String> resp = rt.getForEntity("https://api.example.com/resource", String.class);


Feign (OpenFeign)

@Bean
public RequestInterceptor bearerAuth() {
  return template -> template.header("Authorization", "Bearer " + "eyJ...");
}


JAX-WS / SOAP (header example)

SOAP 1.1 often also needs SOAPAction, but the Bearer goes in HTTP headers:

BindingProvider bp = (BindingProvider) port;
Map<String, List<String>> headers = new HashMap<>();
headers.put("Authorization", List.of("Bearer eyJ..."));
bp.getRequestContext().put(MessageContext.HTTP_REQUEST_HEADERS, headers);


Getting the token (Ping/OIDC) in Java (client-credentials)

var client = HttpClient.newHttpClient();
var form = URLEncoder.encode("grant_type","UTF-8") + "=client_credentials" +
           "&" + URLEncoder.encode("client_id","UTF-8") + "=" + URLEncoder.encode(System.getenv("OIDC_CLIENT_ID"),"UTF-8") +
           "&" + URLEncoder.encode("client_secret","UTF-8") + "=" + URLEncoder.encode(System.getenv("OIDC_CLIENT_SECRET"),"UTF-8");

var req = HttpRequest.newBuilder(URI.create("https://idp.example.com/oauth2/token"))
    .header("Content-Type", "application/x-www-form-urlencoded")
    .POST(HttpRequest.BodyPublishers.ofString(form))
    .build();

var res = client.send(req, HttpResponse.BodyHandlers.ofString());
String token = new org.json.JSONObject(res.body()).getString("access_token");


Pro tips (Kong/Ping friendly)

  • Always send Authorization: Bearer <token> (no quotes, single space).
  • Handle 401 by refreshing the token (cache access_token + expires_in).
  • For Cloudflare/ALB in front, ensure they don’t strip Authorization.
  • If you need mTLS as well, add your keystore/truststore to the HTTP client config; the Bearer header stays the same.
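Following the caching tip above, here’s a minimal TokenProvider sketch (the fetcher Supplier is a hypothetical stand-in for whatever performs the real token call, e.g. the client-credentials snippet earlier):

```java
import java.time.Instant;
import java.util.function.Supplier;

/** Caches an access token and refreshes it shortly before expiry. */
public class TokenProvider {

    /** Minimal token holder: value + expires_in from the token response. */
    public static final class Token {
        final String accessToken;
        final long expiresInSeconds;
        public Token(String accessToken, long expiresInSeconds) {
            this.accessToken = accessToken;
            this.expiresInSeconds = expiresInSeconds;
        }
    }

    private final Supplier<Token> fetcher; // performs the real token call
    private final long skewSeconds;        // refresh this many seconds early
    private Token cached;
    private Instant expiresAt = Instant.EPOCH;

    public TokenProvider(Supplier<Token> fetcher, long skewSeconds) {
        this.fetcher = fetcher;
        this.skewSeconds = skewSeconds;
    }

    public synchronized String get() {
        if (cached == null || Instant.now().isAfter(expiresAt)) {
            cached = fetcher.get();
            expiresAt = Instant.now()
                .plusSeconds(cached.expiresInSeconds - skewSeconds);
        }
        return cached.accessToken;
    }
}
```

Wire `tokenProvider.get()` into whichever interceptor/filter you picked above and every request gets a fresh-enough token for free.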

If you tell me which client you’re using (Spring WebClient, RestTemplate, OkHttp, Apache, or pure Java 11) and how you obtain tokens (client-credentials vs user flow), I’ll tailor a tiny reusable “TokenProvider” + interceptor for you.

Kong – proxy issue

Got it—your client runs in AWS and must use a corporate proxy. With mTLS, a client-side proxy can absolutely be the culprit. Here’s the playbook.

What’s happening

  • If the proxy does TLS inspection (MITM), it terminates TLS and re-signs with its own CA. Your server asks the proxy (not the real client) for a cert → it has none → server logs “trying to obtain a certificate from the client.”
  • The client may also see “unable to get local issuer certificate” because it’s validating the proxy’s substituted cert but doesn’t trust the proxy’s Root CA.

Decide the path (pick one)

A) Allow end-to-end mTLS (best):
Ask the proxy admins to bypass SSL inspection for your domain (add it to the proxy’s TLS bypass list) or ensure it does pure CONNECT tunneling. Then the real client cert reaches your server.

B) Two-hop mTLS (enterprise pattern):

  • Client ↔ Proxy: mTLS using Client-Cert-#1 (issued by proxy’s CA).
  • Proxy ↔ Your Server: mTLS using Client-Cert-#2 (issued by a CA your server trusts).
    Your server will authenticate the proxy’s identity, not the original client. If you need end-user identity, have the proxy forward vetted identity (headers/JWT) and verify it.

C) Temporary test bypass:
On that AWS instance, set NO_PROXY for your hostname if policy allows, and open egress 443 to your server to confirm the issue is the proxy.

Quick diagnostics (run both)

1) See if proxy is intercepting

# Behind the proxy
openssl s_client -proxy PROXY_HOST:PROXY_PORT \
  -connect YOUR_HOST:443 -servername YOUR_HOST -showcerts </dev/null | \
  openssl x509 -noout -issuer -subject -enddate

Compare issuer/subject to the same command from a machine without the proxy.
If issuers differ (e.g., “Zscaler/BlueCoat/Corp Root CA”), it’s inspecting.

2) Prove mTLS works through a tunnel

# If proxy supports pure CONNECT (no inspection)
curl -v https://YOUR_HOST/ \
  --proxy http://PROXY_HOST:PROXY_PORT \
  --cert client.crt --key client.key \
  --cacert server_ca_chain.pem

  • If your proxy itself is HTTPS, add: --proxy-cacert PROXY_ROOT_CA.pem.

Java client notes (common in AWS)

# Client cert for mTLS to your server
-Djavax.net.ssl.keyStore=/path/client.p12 -Djavax.net.ssl.keyStorePassword=***** -Djavax.net.ssl.keyStoreType=PKCS12
# Trust your server (or proxy if inspecting)
-Djavax.net.ssl.trustStore=/path/truststore.jks -Djavax.net.ssl.trustStorePassword=*****
# Proxy settings (CONNECT)
-Dhttps.proxyHost=PROXY_HOST -Dhttps.proxyPort=PROXY_PORT
# Bypass (if allowed)
-Dhttp.nonProxyHosts="YOUR_HOST|*.yourdomain.com"

If inspection can’t be bypassed, you must import the proxy’s Root CA into the truststore, and if mTLS is required by your server, implement the two-hop mTLS model above.
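If you go the truststore route programmatically instead of via `-D` flags, a hedged sketch (path and password are placeholders):

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class ProxyTrust {

    // Builds an SSLContext trusting the CAs in the given keystore
    // (e.g. one that contains the proxy's root CA).
    static SSLContext fromTruststore(KeyStore trustStore) throws Exception {
        TrustManagerFactory tmf = TrustManagerFactory
            .getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx;
    }

    static KeyStore load(String path, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream(path)) {
            ks.load(in, password);
        }
        return ks;
    }

    public static void main(String[] args) throws Exception {
        // Example wiring (placeholder path/password):
        // var ctx = fromTruststore(load("/path/truststore.p12", "*****".toCharArray()));
        // var client = java.net.http.HttpClient.newBuilder().sslContext(ctx).build();
    }
}
```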

What to send to the proxy team

  • “Please bypass TLS inspection (or allow pure CONNECT) for YOUR_HOST:443.”
  • If not possible: “We’ll run two-hop mTLS. We’ll trust your proxy’s client CA for the proxy→server leg; you’ll trust ours for client→proxy.”

If you share the outputs of the two openssl/curl tests (redact hostnames as needed), I’ll tell you exactly whether it’s inspection vs tunneling and the minimal change to make it work.

Short answer: tcpdump can capture payload, but only if the traffic is plaintext.
If it’s HTTPS/TLS (which most APIs are), tcpdump will not show the body unless you decrypt it. It also won’t write into your “app logs”—it just captures packets.

Here are your practical options:

“unable to get local issuer certificate” error + proxy

Yes — if the client is behind a proxy, that can definitely cause the “unable to get local issuer certificate” error.

Here’s why:


1. Intercepting / SSL inspection proxy

  • Some proxies (corporate firewalls, Zscaler, BlueCoat, Palo Alto, etc.) terminate TLS, then re-sign the traffic with their own internal CA.
  • The client sees a cert issued by that proxy CA, not the real site.
  • If the client’s trust store doesn’t include that proxy’s root CA, validation fails → “unable to get local issuer certificate”.

2. Forward proxy (CONNECT tunnel)

  • If the proxy just forwards raw TLS (no inspection), usually no issue.
  • But if the proxy alters headers (Host/SNI) or drops part of the certificate chain, the server’s cert may not validate correctly.

3. Transparent proxy / middlebox

  • Some devices “peek” into TLS but don’t pass along the full chain → clients only see the leaf cert.
  • That also leads to the issuer validation error.

How to confirm if proxy is the problem

Run openssl s_client from behind the proxy and compare to running it directly (bypassing proxy):

openssl s_client -connect example.com:443 -servername example.com -showcerts

If the issuer/subject of the leaf cert changes (e.g. you see “Zscaler Root CA” instead of “Let’s Encrypt”), then the proxy is doing TLS interception.


Fixes if proxy is cause

  • Trusted root CA: import the proxy’s root/intermediate CA into the client trust store (system trust, Java cacerts, browser trust store, etc.).
  • Bypass proxy for domains where you need direct trust (if allowed by policy).
  • Update proxy config so it presents the full chain correctly.

👉 Do you want me to show you exactly how to check if your client’s proxy is re-signing the cert (step-by-step with openssl and comparing cert subjects)?

unable to get local issuer certificate

“unable to get local issuer certificate” means the TLS verifier can’t build a valid trust chain from the server’s cert → a trusted root. In plain English: the server didn’t send the right intermediate(s), or it’s using a CA your client doesn’t trust (e.g., a private CA or a Cloudflare Origin Certificate seen directly).

Here’s how to fix it fast in a Cloudflare → F5 → Kong → Upstream stack.


1) Identify which hop is failing

Run these from your laptop (replace host):

# Client → Cloudflare (public hostname)
openssl s_client -showcerts -servername api.example.com -connect api.example.com:443 </dev/null | sed -n '1,/-END CERTIFICATE-/p'

# Client → F5 (bypass Cloudflare: hit the VIP IP)
openssl s_client -showcerts -connect <F5_PUBLIC_IP>:443 </dev/null

# Direct to Kong node (if it terminates TLS)
openssl s_client -showcerts -connect <KONG_NODE_IP>:8443 </dev/null

# From inside Kong to the upstream (what Kong sees)
docker exec -it kong sh -lc 'apk add --no-cache openssl >/dev/null 2>&1 || true; \
  openssl s_client -showcerts -servername <UPSTREAM_HOST> -connect <UPSTREAM_HOST>:443 </dev/null'

Look at:

  • The presented chain (server cert + intermediates)
  • Verify return code: (should be 0 (ok))

2) Common root causes & specific fixes

A) Missing intermediate on your origin (F5/Kong/upstream)

Symptom: OpenSSL shows only the server cert, or depth=0 ok then fails at depth=1 with this error.

Fix:

  • F5: import the intermediate CA that issued your server cert and set it as the Chain in the Client SSL Profile bound to your VIP (Certificate + Key + Chain). Ensure the chain is complete (server → intermediate(s) → root).
  • Kong terminating TLS (TLS Ingress on 8443): configure cert + key using the full chain bundle (server + intermediates). Most proxies require fullchain.pem (not just cert.pem).
  • Upstream service: install a proper cert and serve the intermediates.

B) You used a Cloudflare Origin Certificate and connected directly to F5

Cloudflare Origin Certs are only trusted by Cloudflare, not by browsers/curl.
Symptom: Works when proxied through Cloudflare (orange cloud), fails when you hit the F5 IP directly.

Fix options:

  • Keep traffic proxied through Cloudflare (don’t bypass); or
  • Install a publicly trusted cert on F5 (e.g., Let’s Encrypt) for direct access.

C) Kong → Upstream uses an internal/private CA

Symptom: Client to Kong is fine; Kong logs show upstream TLS verify errors, or your route returns 500/502.

Fix (Kong Admin API):

  1. Upload your internal CA:
curl -s -X POST :8001/ca_certificates -F cert=@corp-root-or-intermediate.pem
# -> returns {"id":"<CA_ID>", ...}

  2. Attach it to the Service and enforce verification:
curl -s -X PATCH :8001/services/<service_id> \
  -d tls_verify=true \
  -d tls_verify_depth=2 \
  -d ca_certificates[]='<CA_ID>'

(Temporary only: you can set tls_verify=false to prove that CA trust is the issue—then turn it back on with the proper CA.)

D) Wrong SNI / Hostname mismatch

If the cert is for api.example.com but the TLS handshake SNI is different, validation fails.

  • Ensure F5 sends correct SNI when it re-encrypts to Kong/upstream.
  • In Kong Service for HTTPS upstreams, set sni=<upstream-cert-hostname>.

E) Old client trust store / corporate MITM

  • Older clients may lack newer roots (update trust store).
  • If a corporate proxy is intercepting TLS, clients need that proxy’s root in their trust store.

3) Quick “good config” checklist per hop

Cloudflare → F5

  • SSL mode: Full (strict).
  • (Optional) Authenticated Origin Pull: F5 must trust CF Origin Pull CA and require client cert.
  • F5 presents a cert chain your clients would trust when you test direct (unless you never hit it directly).

F5 → Kong

  • If re-encrypting: Server SSL profile with proper SNI to Kong (if Kong expects it). Otherwise send HTTP on a private VLAN to keep it simple.
  • Preserve headers (X-Forwarded-Proto, X-Forwarded-For).

Kong → Upstream

  • For HTTPS upstreams:
    • protocol=https, port=443, sni=<host>, tls_verify=true
    • Attach internal CA certificate if upstream uses a private CA (see step 2C).

4) Don’t “fix” by disabling verification (except to isolate)

  • curl -k, Node’s NODE_TLS_REJECT_UNAUTHORIZED=0, Java -Dcom.sun.net.ssl.checkRevocation=false—use only for a one-off test. The real fix is serving the right chain or installing the proper CA.

5) One-command validator you can paste in tickets

openssl s_client -connect api.example.com:443 -servername api.example.com -verify_return_error </dev/null

You’ll get a clear reason (missing issuer / self-signed / hostname mismatch).


If you tell me which endpoint shows the error (Cloudflare URL, direct F5 VIP, Kong 8443, or Kong→Upstream) and paste the first ~40 lines of the openssl s_client output, I’ll point to the exact certificate/chain file you need to fix.

KONG

Got it — here’s a clean, practical way to wire Client → Cloudflare → F5 → Kong → Upstream so your normal Kong flow
Client → [Kong Route matched] → [Service invoked] → Upstream API works end-to-end.


0) Topology (recommended)

Client ─TLS─> Cloudflare ─TLS─> F5 VIP ─HTTP─> Kong (8000) ─(HTTP/HTTPS)─> Upstream

  • Cloudflare terminates client TLS + WAF/bot/DDOS.
  • F5 is your “origin” for Cloudflare, then load balances across Kong nodes.
  • Kong routes/plugins/auth, then proxies to your upstream.

You can also do F5→Kong over HTTPS (8443). Start with HTTP (simpler), add re-encryption later.


1) Cloudflare (edge) setup

  1. DNS: api.example.com → orange-cloud (proxied) to your F5 public IP.
  2. SSL/TLS: set mode Full (strict).
  3. Authenticated Origin Pull (AOP):
    • Enable it in Cloudflare.
    • On F5, require the Cloudflare client cert (see F5 step) so only CF can hit your VIP.
  4. Origin server certificate (on F5): either
    • a normal public cert (LetsEncrypt, etc.), or
    • a Cloudflare Origin Certificate (valid only to CF; fine if you never bypass CF).
  5. API cache rules: bypass cache for /api/*, enable WebSockets if you use them.

2) F5 LTM (VIP to Kong) essentials

Virtual server (HTTPS on 443) → pool (Kong nodes on 8000)

  • Client SSL profile: present your cert (public or CF Origin Cert).
  • (Optional but recommended) verify Cloudflare client cert for AOP:
    import Cloudflare Origin Pull CA on F5 and set it as Trusted CA; require client cert.
  • HTTP profile: enable; Insert X-Forwarded-For; preserve headers.
  • OneConnect: enable for keep-alives to Kong.
  • Pool members: all Kong nodes, port 8000 (or 8443 if you re-encrypt).
  • Health monitor: HTTP GET to Kong status (see Kong step).

Header hygiene (iRule – optional, if you want to be explicit):

when HTTP_REQUEST {
  # Preserve real client IP from Cloudflare into X-Forwarded-For
  if { [HTTP::header exists "CF-Connecting-IP"] } {
    set cfip [HTTP::header value "CF-Connecting-IP"]
    if { [HTTP::header exists "X-Forwarded-For"] } {
      HTTP::header replace "X-Forwarded-For" "[HTTP::header value X-Forwarded-For], $cfip"
    } else {
      HTTP::header insert "X-Forwarded-For" $cfip
    }
  }
  HTTP::header replace "X-Forwarded-Proto" "https"
}

(Or just enable “X-Forwarded-For: append” in the HTTP profile and set XFP via policy.)


3) Kong Gateway settings (behind F5)

Environment (docker-compose/env vars):

KONG_PROXY_LISTEN=0.0.0.0:8000
# Trust only your F5 addresses/CIDRs (do NOT trust 0.0.0.0/0)
KONG_TRUSTED_IPS=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,<F5_VIP_or_SNATs>
KONG_REAL_IP_HEADER=X-Forwarded-For
KONG_REAL_IP_RECURSIVE=on
# helpful during testing
KONG_HEADERS=latency_tokens
# health endpoint for F5 monitor (internal only)
KONG_STATUS_LISTEN=0.0.0.0:8100

F5 health monitor target (HTTP):

  • URL: http://<kong-node>:8100/status
  • Expect: 200 (Kong’s status port; safe to expose internally)

Define Service/Route (example):

# Service to your upstream (HTTP)
curl -sX POST :8001/services \
  -d name=orders-svc \
  -d host=tomcat-app \
  -d port=8080 \
  -d protocol=http

# Route that matches your API path/host
curl -sX POST :8001/routes \
  -d service.name=orders-svc \
  -d hosts[]=api.example.com \
  -d paths[]=/v1

(If upstream is HTTPS, set protocol=https, port=443, and sni=upstream.host.)


4) Putting it together (request path)

  1. Client → https://api.example.com/v1/...
  2. Cloudflare terminates TLS, adds CF-Connecting-IP, forwards to F5.
  3. F5 validates CF client cert (AOP), appends XFF, sets X-Forwarded-Proto:https, LB to a Kong node.
  4. Kong trusts F5 IP, extracts the real client IP from XFF, matches Route (Host=api.example.com, path=/v1), invokes Service, proxies to Upstream.
  5. Response flows back; Kong adds latency headers (if enabled); F5 returns to CF; CF returns to client.

5) Testing (in layers)

  • Direct to Kong (bypass CF/F5) on the private network: curl -i -H 'Host: api.example.com' http://<kong-node>:8000/v1/ping -H 'Kong-Debug: 1'
  • Through F5 VIP (simulate Cloudflare off): curl -i https://api.example.com/v1/ping --resolve api.example.com:443:<f5-public-ip>
  • Through Cloudflare (real path): curl -i https://api.example.com/v1/ping -H 'Kong-Debug: 1'

Check headers:

  • X-Kong-Upstream-Latency (upstream time)
  • X-Kong-Proxy-Latency (Kong internal)
  • Logs in Kong show client_ip = real client IP (not F5/CF) if trusted IPs are correct.

6) Observability (recommended)

  • Kong plugins:
    • prometheus (metrics), correlation-id (X-Request-ID), http-log or file-log → ELK.
  • F5: request logging profile to ELK; LTM stats.
  • Cloudflare: Security events/Edge logs (if plan allows).

7) Common gotchas (and quick fixes)

  • Client IP shows F5/Cloudflare → set KONG_TRUSTED_IPS, REAL_IP_*; ensure F5 appends XFF and passes CF-Connecting-IP.
  • OIDC header lost → make sure F5 policy does not strip Authorization.
  • Wrong redirects / mixed content → ensure X-Forwarded-Proto: https reaches Kong & upstream.
  • “Server busy” / spikes → raise F5 OneConnect pool, ensure Kong backlog/somaxconn, keepalive to upstream.
  • Health checks hitting Admin API → use KONG_STATUS_LISTEN (8100) instead, not port 8001 (Admin).

If you want, I can:

  • generate a Kong docker-compose block with those envs,
  • give you an F5 monitor config snippet,
  • and add a sample OIDC plugin + rate-limit plugin to the route for a production-like baseline.

Cloudflare → F5 → Kong API stack

Here’s the clean mental model for a Cloudflare → F5 → Kong API stack, plus the key headers/TLS choices and the minimum config knobs so it “just works.”

1) The traffic flow (happy path)

Client ─(HTTPS/HTTP2/3)→ Cloudflare ─(TLS to origin)→ F5 VIP ─(HTTP/HTTPS)→ Kong cluster ─→ Upstream services

  1. Cloudflare terminates the client TLS (always, if the orange-cloud proxy is on).
    • Applies WAF, DDoS, bot rules, rate limits, geo rules, etc.
    • Forwards to your F5 VIP as the “origin”.
  2. F5 LTM receives Cloudflare’s request.
    • Usually terminates TLS again (re-encrypt to Kong or send plain HTTP on the inside).
    • Load-balances across Kong nodes (pool members in Zone A/B).
  3. Kong Gateway routes by Host/path to your backend (service), runs plugins (OIDC, rate-limit, etc.), and proxies to the upstream.

2) TLS choices (pick one per hop)

Cloudflare → F5 (origin TLS):

  • Set Cloudflare SSL mode to Full (strict).
  • Enable Authenticated Origin Pull so only Cloudflare can hit F5.
  • On F5, trust Cloudflare’s Origin Pull CA and require client cert.

F5 → Kong:

  • Simple: terminate on F5 and send HTTP to Kong on the private VLAN.
  • End-to-end TLS: client-SSL on F5, server-SSL from F5 to Kong (re-encrypt), SNI kong.internal (or node name).

Kong → Upstream:

  • Match your upstream: protocol=http|https, SNI if TLS, optionally mTLS to sensitive services.

3) Real client IP (do this or logs/limits will be wrong)

Cloudflare sets:

  • CF-Connecting-IP (client IP)
  • X-Forwarded-For (appends client IP)
  • X-Forwarded-Proto: https

F5 should preserve (not overwrite) X-Forwarded-For and pass X-Forwarded-Proto.

Kong must trust the proxy chain so it can compute the real client IP:

  • Set (env or kong.conf):
    • KONG_TRUSTED_IPS=<F5 private CIDRs or F5 VIPs> (don’t trust 0.0.0.0/0)
    • KONG_REAL_IP_HEADER=X-Forwarded-For
    • KONG_REAL_IP_RECURSIVE=on
  • Then client_ip in logs, rate-limit/correlation will be the actual user IP from Cloudflare.

If you prefer using Cloudflare’s header explicitly, you can have F5 copy CF-Connecting-IP into the leftmost position of X-Forwarded-For.

4) Load balancing & health checks (avoid double “mystery” failover)

  • Cloudflare (optional LB): usually point it at a single F5 VIP per region and let F5 do node health.
  • F5 → Kong nodes: HTTP health monitor (e.g., GET /status/health on each Kong).
  • Kong → upstreams: use Kong Upstreams/Targets with active + passive health checks to eject bad app pods.

Pick one layer to be the source of truth per hop (Cloudflare LB or F5, Kong or upstream LB) to avoid contradictory decisions.

5) Protocols & connections

  • HTTP versions: client can be HTTP/2 or HTTP/3 to Cloudflare. Cloudflare→F5 is HTTP/1.1 or HTTP/2 (CF may downgrade). F5→Kong is typically HTTP/1.1.
  • Keep-alive: enable OneConnect on F5 and keep-alive to Kong to avoid connection churn.
  • WebSockets/gRPC: supported end-to-end; ensure Upgrade/HTTP2 is enabled through F5 and Kong Routes/Services.

6) Minimal config snippets

F5 (HTTP profile / header handling):

  • Enable “Insert X-Forwarded-For” (or an iRule to append not overwrite).
  • Preserve X-Forwarded-Proto = https.
  • If using Authenticated Origin Pull: client-SSL requires CF client cert; trust CF Origin CA.

Kong (env):

KONG_HEADERS=latency_tokens
KONG_TRUSTED_IPS=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16, <your F5 VIPs/CIDRs>
KONG_REAL_IP_HEADER=X-Forwarded-For
KONG_REAL_IP_RECURSIVE=on

Kong Service (HTTPS upstream with SNI):

protocol: https
host: api.internal.example
port: 443
tls_verify: true
sni: api.internal.example

Kong rate limiting behind proxies: use consumer or ip policy; with trusted IPs set, ip uses the real client.

7) Observability (what to turn on)

  • Cloudflare: (if plan allows) Edge logs / Security events (attack/waf/bot).
  • F5: LTM logs + request logging profiles; export to ELK.
  • Kong: enable Prometheus plugin, correlation-id, and http-log/file-log to ELK.
  • Make sure your ELK/Loki sees client_ip, X-Request-ID, service, route, upstream_status, latencies.{kong,proxy,request}.

8) Common pitfalls & quick fixes

  • Client IP shows F5/Cloudflare: set KONG_TRUSTED_IPS and KONG_REAL_IP_* as above; ensure F5 appends—not overwrites—X-Forwarded-For.
  • OIDC 401s behind CF/F5: your LB stripped Authorization. Validate with an echo upstream; ensure F5 leaves Authorization intact.
  • Mixed-content / wrong redirects: preserve X-Forwarded-Proto: https through to Kong and upstream so apps build HTTPS URLs.
  • Header too large (Set-Cookie): raise Kong Nginx proxy buffers if you see upstream sent too big header.
  • Double LB flaps: don’t health-check the same hop in both Cloudflare and F5 with different intervals/paths.

9) Example end-to-end (typical, secure)

  1. Cloudflare Full (strict) + Authenticated Origin Pull → F5 public VIP api.example.com:443.
  2. F5 terminates TLS, adds XFF if missing, passes X-Forwarded-Proto: https, LB to kongA:8000, kongB:8000.
  3. Kong trusts F5 IPs, pulls real client IP from XFF, runs OIDC/rate-limit, sends to upstream (HTTPS with SNI).
  4. Logs/metrics from all 3 land in ELK/Grafana.

If you want, I can drop a ready-to-run Kong docker-compose with those envs set, plus example F5 monitor path and a Kong Service/Route pointing to your php/tomcat containers.

ClientMTLSAuth status absent

It means no client certificate was presented/validated on the TLS handshake—i.e., client mTLS auth did not happen for that request.

Think of it like Nginx’s $ssl_client_verify = NONE: the server (Cloudflare/F5/Kong) didn’t see a client cert, so mTLS status is absent.

Why it happens

  • Client didn’t send a cert (missing --cert/--key in curl, or browser/keystore not configured).
  • mTLS not required on that hop (e.g., F5/Cloudflare set to “request/ignore” instead of “require”; Kong’s mtls-auth plugin not enabled).
  • TLS terminates before Kong (Cloudflare/F5 terminates TLS; the origin can’t see the client cert unless that proxy enforces mTLS itself).
  • Wrong host/port/SNI that doesn’t have the mTLS policy.

Quick checks

# PEM pair
curl -vk https://api.example.com/path \
  --cert client.crt --key client.key

# or PKCS#12
curl -vk https://api.example.com/path \
  --cert-type P12 --cert client.p12:password

Look for SSL client certificate lines in the verbose output.

  • On Cloudflare/F5:
    • Require client certs on the relevant hostname/path.
    • Ensure the CA used to validate client certs is uploaded/selected.
    • If using Cloudflare, remember: the client’s cert terminates at Cloudflare; the origin (Kong) won’t receive it. Enforce mTLS at Cloudflare edge and use origin controls (e.g., Authenticated Origin Pulls) separately.
  • On Kong:
    • If using Enterprise mtls-auth plugin, enable it on the route/service and upload your CA certificate to /ca-certificates; reference it in the plugin.
    • If relying on Nginx-level mTLS, ensure the proxy listener is configured with ssl_client_certificate and ssl_verify_client on; (or the equivalent in your deployment).

Interpreting statuses

  • absent = no cert presented.
  • failed/invalid = cert presented but couldn’t be validated (untrusted/expired/CN mismatch).
  • success/verified = cert was validated.
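To rule out the “client didn’t send a cert” case from Java, the client needs key material wired into its SSLContext. A hedged sketch (keystore path/password are placeholders; this only supplies the client cert, trust config stays as before):

```java
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;

public class ClientCert {

    // Wires a client certificate (PKCS#12) into an SSLContext so the
    // handshake can answer the server's CertificateRequest.
    static SSLContext withClientKey(KeyStore keyStore, char[] password) throws Exception {
        KeyManagerFactory kmf = KeyManagerFactory
            .getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, password);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), null, null); // null = default trust managers
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        // Example wiring (placeholder path/password):
        // KeyStore ks = KeyStore.getInstance("PKCS12");
        // try (var in = new java.io.FileInputStream("/path/client.p12")) {
        //     ks.load(in, "*****".toCharArray());
        // }
        // var client = java.net.http.HttpClient.newBuilder()
        //     .sslContext(withClientKey(ks, "*****".toCharArray())).build();
    }
}
```

If the status is still absent with this in place, the cert is being terminated at an earlier hop (Cloudflare/F5), not lost by the client.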

If you share how traffic flows (Client → Cloudflare/F5 → Kong) and where you intend to enforce mTLS, I’ll give you the exact config and a working curl for that hop.

kong 500

Good question 👍 — let’s break down upstream_status=500 in Kong.


🔹 What it means

  • 500 = Internal Server Error returned by your upstream service, not Kong.
  • Kong successfully forwarded the request to the backend, got a real HTTP response, and passed that status back.
  • So unlike 502/503/504 (which usually mean Kong couldn’t reach the upstream), a 500 means Kong did reach it, but the app/server itself failed internally.

🔹 Common causes

  • Unhandled exception in the app (backend crashed while processing request).
  • Bad input from client → app logic throws error instead of handling gracefully.
  • Dependency failure in upstream (e.g., DB connection error, null pointer, etc.).
  • Misrouted traffic (Kong points to wrong host/port → some service returns 500).
  • Authentication mismatch (upstream expected something different, failed internally).

🔹 How to verify it’s the upstream

  1. Check Kong’s access log: status=500 upstream_status=500 → Both match = the error truly comes from upstream. status=401 upstream_status=200 → Kong overrode the upstream result (plugin decision).
  2. If you’re unsure, temporarily point the route directly to a test echo server (httpbin or mockbin) and re-run → if 500 disappears, your real upstream is the culprit.
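Those two log rules can be sketched as a tiny classifier (a hypothetical helper for reading log lines, not a Kong API):

```java
public class KongStatus {

    // Interprets a (status, upstream_status) pair from a Kong access log.
    // upstreamStatus == null corresponds to "-" in the log.
    static String classify(int status, Integer upstreamStatus) {
        if (upstreamStatus == null) {
            return "no upstream response (connect/timeout failure)";
        }
        if (status == upstreamStatus) {
            return "error originated in the upstream";
        }
        return "Kong (a plugin) overrode the upstream result";
    }
}
```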

🔹 Troubleshooting steps

  • Direct test upstream (bypass Kong): curl -i http://<upstream_host>:<port>/<path> If you still get 500 → it’s an upstream bug.
  • Enable upstream error logging in Kong (in debug mode or use a logging plugin → Splunk/ELK).
  • Check app logs on the backend service for stack traces.
  • If you see status=502 upstream_status=- → that’s different (connection failed, no response).

✅ So, in short:
upstream_status=500 means your app/service returned a 500 → Kong didn’t block it, just proxied it.


Do you want me to also list the difference between 500 vs 502 vs 503 vs 504 in Kong upstream_status, so you can quickly tell where the problem is (Kong vs upstream)?

Kong latency 2

Short answer:
In Kong logs, proxy latency is the time spent waiting on your upstream service (the API/backend) — i.e., how long it took the upstream to respond to Kong.

Here’s the breakdown of the three latency fields you’ll see in Kong logs:

  • latencies.proxy → Upstream latency (a.k.a. “proxy latency”): time from when Kong sends the request to the upstream until it starts receiving the response.
  • latencies.kong → Kong internal time: routing + plugin execution + overhead inside Kong.
  • latencies.request → Total request time as seen by the client.

Quick mental model:

Client ──> [ Kong (latencies.kong) ] ──> Upstream API (latencies.proxy) ──> [ Kong ] ──> Client
                           \________________ latencies.request ________________/

A common point of confusion: in response headers,

  • X-Kong-Upstream-Latency → latencies.proxy (upstream time)
  • X-Kong-Proxy-Latency → latencies.kong (Kong time)
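As a rough sketch of that model (toy arithmetic, not Kong’s own accounting): whatever part of latencies.request is not covered by kong + proxy is client/network transfer time, so you can point at the slowest hop mechanically:

```java
public class KongLatency {

    // request covers the whole exchange; kong and proxy are the two measured
    // components, the remainder is client/network transfer time.
    static String slowestHop(long requestMs, long kongMs, long proxyMs) {
        long transferMs = Math.max(0, requestMs - kongMs - proxyMs);
        if (proxyMs >= kongMs && proxyMs >= transferMs) return "upstream";
        if (kongMs >= transferMs) return "kong";
        return "network/client";
    }
}
```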

So, if you see high proxy latency, the slowness is almost always in your backend (or the network to it), not Kong itself. Focus on the upstream’s performance (DB calls, external services), network/DNS, and connection reuse; use Kong’s service/route timeouts (connect_timeout, read_timeout, write_timeout) to guard against outliers.