API – response time

Here are fast, reliable ways to measure client-side API response time (and break it down) — from your laptop or from an EKS pod.

1) One-shot timing (curl)

This prints DNS, TCP, TLS, TTFB, and Total in one go:

curl -s -o /dev/null -w '
{ "http_code":%{http_code},
  "remote_ip":"%{remote_ip}",
  "dns":%{time_namelookup},
  "tcp":%{time_connect},
  "tls":%{time_appconnect},
  "ttfb":%{time_starttransfer},
  "total":%{time_total},
  "size":%{size_download},
  "speed":%{speed_download}
}
' https://api.example.com/path

Fields

  • dns: DNS lookup
  • tcp: TCP connect
  • tls: TLS handshake (0 if HTTP)
  • ttfb: time to first byte (request→first response byte)
  • total: full download time
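
Note that curl's time_* values are cumulative from the start of the request, so per-phase durations come from subtracting adjacent timers. A quick awk sketch with made-up sample values (in the dns,tcp,tls,ttfb,total order above):

```shell
# curl timers are cumulative; subtract adjacent ones to get each phase.
echo "0.012,0.045,0.110,0.310,0.420" | awk -F, '{
  printf "dns=%.3f tcp=%.3f tls=%.3f wait=%.3f download=%.3f\n",
         $1, $2-$1, $3-$2, $4-$3, $5-$4
}'
# dns=0.012 tcp=0.033 tls=0.065 wait=0.200 download=0.110
```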

2) From EKS (ephemeral pod)

Run N samples and capture a CSV:

kubectl run curl --rm -i --restart=Never --image=curlimages/curl:8.8.0 -- \
sh -c 'for i in $(seq 1 50); do \
  curl -s -o /dev/null -w "%{time_namelookup},%{time_connect},%{time_appconnect},%{time_starttransfer},%{time_total}\n" \
  https://api.example.com/health; \
done' > timings.csv

Open timings.csv and read the columns as dns,tcp,tls,ttfb,total. A large ttfb means a slow upstream/app; a large tls means handshake trouble; a big gap between total and ttfb is payload/download time.
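
To get percentiles out of that CSV without extra tooling, sort on the total column and index into it. Synthetic data stands in for real samples here; with real data, replace the seq|awk generator with `cat timings.csv`:

```shell
# p50/p95/max of the "total" column (5th) of a dns,tcp,tls,ttfb,total CSV.
# Synthetic totals 0.010..1.000 are generated so the snippet is self-contained.
seq 1 100 | awk '{printf "0,0,0,0,%.3f\n", $1/100}' \
| sort -t, -k5 -g \
| awk -F, '{ v[NR] = $5 }
  END { printf "p50=%.3f p95=%.3f max=%.3f\n",
        v[int(NR*0.50)], v[int(NR*0.95)], v[NR] }'
# p50=0.500 p95=0.950 max=1.000
```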

3) Separate proxy vs upstream (Kong in the path)

Kong adds latency headers you can read on the client:

curl -s -o /dev/null -D - https://api.example.com/path | grep -i '^x-kong-.*latency'
# x-kong-proxy-latency: <ms>    (time spent inside Kong before proxying)
# x-kong-upstream-latency: <ms> (time the upstream took to respond)

These help you see if delay is at the gateway or in the service.
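
As a sanity check on where time goes, subtract Kong's two latency headers from the client-measured total; what's left is network + TLS between client and gateway. The numbers below are hypothetical:

```shell
# Hypothetical sample: client measured 420 ms total; the response carried
# x-kong-proxy-latency: 15 and x-kong-upstream-latency: 310 (both in ms).
awk 'BEGIN {
  total_ms = 420; kong_ms = 15; upstream_ms = 310
  printf "client<->gateway network+tls = %d ms\n", total_ms - kong_ms - upstream_ms
}'
# client<->gateway network+tls = 95 ms
```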

4) Quick load/percentiles (pick one)

  • hey: hey -z 30s -c 20 https://api.example.com/path
  • vegeta: echo "GET https://api.example.com/path" | vegeta attack -rate=20 -duration=30s | vegeta report
  • k6 (scriptable) – save as test.js:

import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 20,
  duration: '30s',
  thresholds: { http_req_duration: ['p(95)<300'] },
};

export default () => {
  const r = http.get('https://api.example.com/path');
  check(r, { '200': (res) => res.status === 200 });
};

Run: k6 run test.js

5) App-level timers (optional)

Add a Server-Timing header from the API to expose your own phase timings (DB, cache, etc.). Then the client can read those headers to correlate.
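
On the client side a Server-Timing value splits cleanly with sed/tr into "phase duration_ms" lines. The header value below is an invented example:

```shell
# Parse a Server-Timing header value into "phase duration" lines.
# The sample header is made up for illustration.
echo 'Server-Timing: db;dur=53, cache;dur=7.2, app;dur=120' \
| sed 's/^Server-Timing: //' \
| tr ',' '\n' \
| sed 's/^ *//; s/;dur=/ /'
# db 53
# cache 7.2
# app 120
```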

6) Common gotchas

  • Proxies can add latency; test both with and without proxy (NO_PROXY / --proxy).
  • Auth: measure with real headers/tokens; 401/403 will skew.
  • SNI/Host: if hitting by IP, use --resolve host:443:IP -H "Host: host" so cert/routing is correct.
  • Warmup: discard first few samples (JIT, caches, TLS session reuse).

If you want, share a few curl -w outputs from local vs EKS and I’ll pinpoint where the time is going (DNS/TLS/TTFB/payload).

tcpdump

Short answer: tcpdump can capture payload, but only if the traffic is plaintext.
If it’s HTTPS/TLS (which most APIs are), tcpdump will not show the body unless you decrypt it. It also won’t write into your “app logs”—it just captures packets.

Here are your practical options:

  • Log bodies at the application or gateway (e.g., a Kong logging plugin) instead of sniffing packets.
  • Decrypt a capture by exporting TLS session keys (SSLKEYLOGFILE with curl or a browser) and loading them in Wireshark.
  • Terminate TLS at a debug proxy such as mitmproxy, for test traffic only.

403

Short version: the cipher suite isn’t why you’re getting 403. A 403 is an authorization/ACL/WAF decision at L7. The TLS differences you saw (TLS 1.3 locally vs TLS 1.2 on EKS) are fine and expected depending on your load balancer policy.

What to do first (to find the real 403 source)

  1. Hit the EKS URL with verbose curl and look at headers:
curl -i -v https://your.domain/path

  • If the response headers look AWS-y and you see it immediately, the 403 likely comes from ALB/WAF.
  • If it reaches your pod/ingress and then 403s, it’s from Ingress/Kong/app (e.g., missing Authorization, IP allowlist, route rule, JWT/OIDC plugin, etc.).
  2. Check ALB access logs for the request:
  • If elb_status_code = 403 and target_status_code is "-", the request never reached your target — ALB/WAF blocked it (WAF rules often show up as 403).
  • If elb_status_code = 403 and target_status_code = 403, your target/app returned the 403 and ALB just relayed it. (AWS Documentation)
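
If you'd rather script the log check: in the ALB access-log format, elb_status_code and target_status_code are the 9th and 10th space-separated fields. The log line below is shortened and made up:

```shell
# Pull the two status codes out of an ALB access-log line.
# Sample line is abbreviated/fabricated; real lines have more trailing fields.
line='https 2024-01-01T00:00:00Z app/my-alb/50dc6c495c0c9188 10.0.0.1:46532 10.0.1.5:8000 0.000 0.001 0.000 403 403 34 366 "GET https://api.example.com:443/path HTTP/1.1"'
echo "$line" | awk '{ print "elb=" $9, "target=" $10 }'
# elb=403 target=403
```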

Recommended TLS settings (good practice, not the 403 fix)

If you terminate TLS on an AWS ALB (via AWS Load Balancer Controller)

Use a modern security policy that enables TLS 1.3 and falls back to TLS 1.2:

metadata:
  annotations:
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-2021-06

That policy offers TLS 1.3 and 1.2 with strong ciphers. (If you truly want TLS 1.3 only: ELBSecurityPolicy-TLS13-1-3-2021-06.) (AWS Documentation, Kubernetes SIGs)

Tip: If you need maximum client compatibility, attach both an RSA and an ECDSA ACM cert (ALB supports multiple certs):
alb.ingress.kubernetes.io/certificate-arn: arn:...rsa,arn:...ecdsa. (Kubernetes SIGs)

If you terminate TLS on an NLB (TLS listener)

Pick a TLS 1.3 policy on the listener; NLB supports it and will use appropriate backend defaults. (AWS Documentation)

If you terminate TLS on ingress-nginx inside the cluster

Use TLS 1.2 + 1.3 and a modern cipher list:

# ConfigMap for ingress-nginx controller
data:
  ssl-protocols: "TLSv1.2 TLSv1.3"
  ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305"
  ssl-prefer-server-ciphers: "off"

(Those are the ingress-nginx docs’ modern defaults; TLS 1.3 ciphers are implicit.) (kubernetes.github.io)


Why your TLS info looks different

  • Local: likely direct to your app or local reverse proxy → negotiates TLS 1.3 (TLS_AES_128_GCM_SHA256 is one of the standard TLS 1.3 cipher suites).
  • EKS: you’re probably terminating at ALB (or an older policy) → negotiated TLS 1.2 (TLS_ECDHE_ECDSA_WITH_AES_128_…). That’s still strong if you use a modern ALB policy. (AWS Documentation)

Quick 403 triage checklist

  • AWS WAF attached? Check WAF logs for the rule ID; 403s here are common and configurable. (Repost, Amazon Web Services, Inc.)
  • Auth headers missing? Ingress/controller or a proxy hop might be stripping/renaming Authorization. Confirm with curl -v -H "Authorization: Bearer …".
  • Host/Path mismatch? Ingress host: and path rules must match exactly; wrong Host yields 4xx at ingress.
  • IP allowlist / Geo rules? Compare client IP vs policy.
  • Kong/OIDC/JWT plugin? Kong often 403s on invalid/absent token or unauthorized consumer; check Kong logs for the route and plugin decision.
  • CORS preflight? Browsers can show 403 on failed OPTIONS—ensure ingress routes OPTIONS.

If you share:

  • one ALB access log line for a blocked request, and/or
  • the response headers from curl -i -v on EKS,

I’ll tell you precisely whether the 403 is from ALB/WAF or from your Ingress/Kong/app—and the exact knob to turn.


Situation: locally you can call the on-prem API just fine, but from EKS you get 403.
Handshake is OK (so network + TLS basics are fine). This is almost never a cipher issue—403 is an L7 decision (WAF/edge/Kong/app/proxy policy).

Here’s a tight plan to pinpoint and fix it.

1) Identify who is returning the 403 (one command from EKS)

Run from an EKS pod (ephemeral curl pod shown below) and inspect headers:

kubectl run curl --rm -it --image=curlimages/curl:8.8.0 -- \
  sh -c 'curl -i -v https://your.onprem.host/path -H "Authorization: Bearer $TOKEN" --max-time 20'

Look for these telltales in the 403 response:

  • Cloudflare: server: cloudflare, cf-ray, cf-cache-status
  • F5: server: BigIP or ASM headers like X-WA-Info
  • Kong: server: kong, x-kong-*
  • Proxy (corp/Zscaler/etc.): Via, X-BlueCoat-Via, custom proxy headers
  • Your app/ingress: app-specific headers, or none
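
Those telltales can be turned into a quick classifier: pipe the `curl -si …` output into it. The function and the sample response below are a rough sketch, not an exhaustive rule set:

```shell
# Rough classifier for "who sent the 403", keyed on the telltale headers above.
classify_403() {
  h=$(tr 'A-Z' 'a-z')   # lowercase; HTTP/2 lowercases header names anyway
  case "$h" in
    *cf-ray*|*"server: cloudflare"*) echo cloudflare ;;
    *"server: kong"*|*x-kong-*)      echo kong ;;
    *bigip*|*x-wa-info*)             echo f5 ;;
    *via:*|*x-bluecoat*)             echo proxy ;;
    *)                               echo app-or-ingress ;;
  esac
}

# Fabricated Kong-style response for illustration:
printf 'HTTP/1.1 403 Forbidden\nServer: kong/3.4\nX-Kong-Request-Id: abc\n' | classify_403
# kong
```

Real usage would be `curl -si https://your.onprem.host/path | classify_403`.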

If your pod must use an egress proxy, also try explicitly through it:

kubectl run curlp --rm -it --image=curlimages/curl:8.8.0 -- \
 sh -c 'curl -i -v https://your.onprem.host/path --proxy http://PROXY_HOST:PROXY_PORT --max-time 20'

2) Fix by source

A) WAF/Edge (Cloudflare/F5) is denying EKS

  • Likely IP allowlist/geo/ASN or bot rule.
    Fix: Allowlist your EKS NAT/Egress IP(s) or relax/adjust the specific WAF rule; confirm in WAF/Firewall logs.

B) Corporate egress proxy is denying

  • From EKS, traffic goes via a corp proxy that enforces access policy or strips Authorization.
    Fix:
    • Whitelist the destination domain on the proxy, or
    • Set NO_PROXY=your.onprem.host on the workload (if policy allows), or
    • Ensure the proxy allows the path and forwards all headers (esp. Authorization).
    • If the proxy does TLS inspection + your server expects client mTLS, either bypass inspection for this domain or use a two-hop mTLS model.

C) Kong/Ingress/app is denying

  • Common causes: missing/invalid token, ACL/IP allowlist plugin, wrong Host header/SNI, or CORS preflight.
    Fix:
    • Verify the request from EKS includes the same auth headers as local (env var or secret may be missing).
    • If calling by IP, set the correct Host header:
      curl -H "Host: your.onprem.host" --resolve your.onprem.host:443:W.X.Y.Z https://your.onprem.host/...
    • Check Kong logs for route/plugin decision (look for consumer, route, service, plugin names).

3) Quick differentials to run from EKS

  • Compare with and without proxy env:
# Without proxy (if allowed)
curl -i -v https://your.onprem.host/path

# Via proxy
curl -i -v https://your.onprem.host/path --proxy http://PROXY_HOST:PROXY_PORT

  • Preserve Host/SNI if you must target an IP:
curl -i -v --resolve your.onprem.host:443:W.X.Y.Z https://your.onprem.host/path

  • Include your real auth:
curl -i -v -H "Authorization: Bearer $TOKEN" https://your.onprem.host/path

  • See who really answered:
    Check Server, Via, cf-ray, x-kong*, X-Envoy-*, etc., in the 403 response.

4) About ciphers/TLS

Use modern policies that allow TLS 1.3 + 1.2 with ECDHE + AES-GCM/CHACHA. The exact cipher (e.g., TLSv1.3 TLS_AES_128_GCM_SHA256 locally vs TLSv1.2 ECDHE_*_AES_128_GCM_SHA256 from EKS) will not cause a 403. It’s fine to keep both TLS 1.2 and 1.3 enabled; the fix lies in the access/authorization layer, not the cipher.


If you paste the 403 response headers from the EKS curl (no body needed), I can tell you in one shot whether it’s WAF/edge, proxy, Kong, or your app, and give the exact knob to turn (allowlist, header forward, plugin, or bypass).

cert error

That error means the client sees a cert chain that includes a CA it doesn’t trust—most commonly a corporate proxy’s self-signed root (TLS inspection) or a misconfigured server chain. Here’s the fastest way to fix it given your AWS client + proxy setup.

0) Quick proof (run behind the proxy)

# Shows who signed what the client actually sees
openssl s_client -proxy PROXY_HOST:PROXY_PORT \
  -connect YOUR_HOST:443 -servername YOUR_HOST -showcerts -verify_return_error </dev/null \
| openssl x509 -noout -subject -issuer -enddate

  • If the issuer is your company/Zscaler/BlueCoat/etc → it’s proxy interception.
  • If the issuer is a normal public CA but the chain still fails → your server/edge isn’t sending the proper intermediates.
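
To make that issuer check mechanical, a tiny classifier over the issuer line. The vendor substrings are illustrative examples of common inspection CAs, not an exhaustive list:

```shell
# Classify the issuer printed by the openssl command above.
issuer_kind() {
  case "$(echo "$1" | tr 'A-Z' 'a-z')" in
    *zscaler*|*bluecoat*|*"palo alto"*|*corp*) echo "proxy interception likely" ;;
    *) echo "public CA - check the served intermediates instead" ;;
  esac
}

issuer_kind "issuer=C=US, O=Zscaler Inc., CN=Zscaler Root CA"
# proxy interception likely
```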

how to integrate token in java call

Gotcha—here are quick, copy-paste ways to add an OIDC/JWT Bearer token to Java HTTP calls. Pick the stack you use.


Java 11+ HttpClient

var client = java.net.http.HttpClient.newHttpClient();
String token = "eyJ..."; // put your access token here

var req = java.net.http.HttpRequest.newBuilder()
    .uri(java.net.URI.create("https://api.example.com/resource"))
    .header("Authorization", "Bearer " + token)
    .header("Content-Type", "application/json")
    .POST(java.net.http.HttpRequest.BodyPublishers.ofString("{\"ping\":\"pong\"}"))
    .build();

var res = client.send(req, java.net.http.HttpResponse.BodyHandlers.ofString());
System.out.println(res.statusCode() + " " + res.body());


OkHttp (popular + simple)

OkHttpClient client = new OkHttpClient.Builder()
    .addInterceptor(chain -> {
        String token = "eyJ..."; // inject per request or from a provider
        Request req = chain.request().newBuilder()
            .addHeader("Authorization", "Bearer " + token)
            .build();
        return chain.proceed(req);
    })
    .build();

Request request = new Request.Builder()
    .url("https://api.example.com/resource")
    .post(RequestBody.create("{\"ping\":\"pong\"}", MediaType.get("application/json")))
    .build();

try (Response resp = client.newCall(request).execute()) {
    System.out.println(resp.code());
}


Apache HttpClient 5

var httpClient = HttpClients.custom()
    .addRequestInterceptorFirst((HttpRequest request, EntityDetails entity, HttpContext ctx) -> {
        request.addHeader("Authorization", "Bearer " + "eyJ...");
    })
    .build();

var post = new HttpPost("https://api.example.com/resource");
post.setHeader(HttpHeaders.CONTENT_TYPE, "application/json");
post.setEntity(new StringEntity("{\"ping\":\"pong\"}", StandardCharsets.UTF_8));

try (var resp = httpClient.execute(post)) {
    System.out.println(resp.getCode());
}


Spring (WebClient) — preferred in Spring Boot

@Bean
WebClient webClient() {
  return WebClient.builder()
      .filter((req, next) -> {
        String token = "eyJ..."; // inject from a bean that caches/refreshes
        ClientRequest r = ClientRequest.from(req)
            .header(HttpHeaders.AUTHORIZATION, "Bearer " + token).build();
        return next.exchange(r);
      })
      .build();
}

// use it
webClient().post().uri("https://api.example.com/resource")
  .contentType(MediaType.APPLICATION_JSON)
  .bodyValue(Map.of("ping","pong"))
  .retrieve().toEntity(String.class).block();


Spring (RestTemplate)

RestTemplate rt = new RestTemplate();
rt.getInterceptors().add((req, body, ex) -> {
  req.getHeaders().setBearerAuth("eyJ...");
  return ex.execute(req, body);
});
ResponseEntity<String> resp = rt.getForEntity("https://api.example.com/resource", String.class);


Feign (OpenFeign)

@Bean
public RequestInterceptor bearerAuth() {
  return template -> template.header("Authorization", "Bearer " + "eyJ...");
}


JAX-WS / SOAP (header example)

SOAP 1.1 often also needs SOAPAction, but the Bearer goes in HTTP headers:

BindingProvider bp = (BindingProvider) port;
Map<String, List<String>> headers = new HashMap<>();
headers.put("Authorization", List.of("Bearer eyJ..."));
bp.getRequestContext().put(MessageContext.HTTP_REQUEST_HEADERS, headers);


Getting the token (Ping/OIDC) in Java (client-credentials)

var client = HttpClient.newHttpClient();
var form = "grant_type=client_credentials" +
           "&client_id=" + URLEncoder.encode(System.getenv("OIDC_CLIENT_ID"), StandardCharsets.UTF_8) +
           "&client_secret=" + URLEncoder.encode(System.getenv("OIDC_CLIENT_SECRET"), StandardCharsets.UTF_8);

var req = HttpRequest.newBuilder(URI.create("https://idp.example.com/oauth2/token"))
    .header("Content-Type", "application/x-www-form-urlencoded")
    .POST(HttpRequest.BodyPublishers.ofString(form))
    .build();

var res = client.send(req, HttpResponse.BodyHandlers.ofString());
String token = new org.json.JSONObject(res.body()).getString("access_token");


Pro tips (Kong/Ping friendly)

  • Always send Authorization: Bearer <token> (no quotes, single space).
  • Handle 401 by refreshing the token (cache access_token + expires_in).
  • For Cloudflare/ALB in front, ensure they don’t strip Authorization.
  • If you need mTLS as well, add your keystore/truststore to the HTTP client config; the Bearer header stays the same.

If you tell me which client you’re using (Spring WebClient, RestTemplate, OkHttp, Apache, or pure Java 11) and how you obtain tokens (client-credentials vs user flow), I’ll tailor a tiny reusable “TokenProvider” + interceptor for you.

Kong – proxy issue

Got it—your client runs in AWS and must use a corporate proxy. With mTLS, a client-side proxy can absolutely be the culprit. Here’s the playbook.

What’s happening

  • If the proxy does TLS inspection (MITM), it terminates TLS and re-signs with its own CA. Your server asks the proxy (not the real client) for a cert → it has none → server logs “trying to obtain a certificate from the client.”
  • The client may also see “unable to get local issuer certificate” because it’s validating the proxy’s substituted cert but doesn’t trust the proxy’s Root CA.

Decide the path (pick one)

A) Allow end-to-end mTLS (best):
Ask the proxy admins to bypass SSL inspection for your domain (add it to the proxy’s TLS bypass list) or ensure it does pure CONNECT tunneling. Then the real client cert reaches your server.

B) Two-hop mTLS (enterprise pattern):

  • Client ↔ Proxy: mTLS using Client-Cert-#1 (issued by proxy’s CA).
  • Proxy ↔ Your Server: mTLS using Client-Cert-#2 (issued by a CA your server trusts).
    Your server will authenticate the proxy’s identity, not the original client. If you need end-user identity, have the proxy forward vetted identity (headers/JWT) and verify it.

C) Temporary test bypass:
On that AWS instance, set NO_PROXY for your hostname if policy allows, and open egress 443 to your server to confirm the issue is the proxy.

Quick diagnostics (run both)

1) See if proxy is intercepting

# Behind the proxy
openssl s_client -proxy PROXY_HOST:PROXY_PORT \
  -connect YOUR_HOST:443 -servername YOUR_HOST -showcerts </dev/null | \
  openssl x509 -noout -issuer -subject -enddate

Compare issuer/subject to the same command from a machine without the proxy.
If issuers differ (e.g., “Zscaler/BlueCoat/Corp Root CA”), it’s inspecting.

2) Prove mTLS works through a tunnel

# If proxy supports pure CONNECT (no inspection)
curl -v https://YOUR_HOST/ \
  --proxy http://PROXY_HOST:PROXY_PORT \
  --cert client.crt --key client.key \
  --cacert server_ca_chain.pem

  • If your proxy itself is HTTPS, add: --proxy-cacert PROXY_ROOT_CA.pem.

Java client notes (common in AWS)

# Client cert for mTLS to your server
-Djavax.net.ssl.keyStore=/path/client.p12 -Djavax.net.ssl.keyStorePassword=***** -Djavax.net.ssl.keyStoreType=PKCS12
# Trust your server (or proxy if inspecting)
-Djavax.net.ssl.trustStore=/path/truststore.jks -Djavax.net.ssl.trustStorePassword=*****
# Proxy settings (CONNECT)
-Dhttps.proxyHost=PROXY_HOST -Dhttps.proxyPort=PROXY_PORT
# Bypass (if allowed)
-Dhttp.nonProxyHosts="YOUR_HOST|*.yourdomain.com"

If inspection can’t be bypassed, you must import the proxy’s Root CA into the truststore, and if mTLS is required by your server, implement the two-hop mTLS model above.

What to send to the proxy team

  • “Please bypass TLS inspection (or allow pure CONNECT) for YOUR_HOST:443.”
  • If not possible: “We’ll run two-hop mTLS. We’ll trust your proxy’s client CA for the proxy→server leg; you’ll trust ours for client→proxy.”

If you share the outputs of the two openssl/curl tests (redact hostnames as needed), I’ll tell you exactly whether it’s inspection vs tunneling and the minimal change to make it work.


“unable to get local issuer certificate” error – proxy

Yes — if the client is behind a proxy, that can definitely cause the “unable to get local issuer certificate” error.

Here’s why:


1. Intercepting / SSL inspection proxy

  • Some proxies (corporate firewalls, Zscaler, BlueCoat, Palo Alto, etc.) terminate TLS, then re-sign the traffic with their own internal CA.
  • The client sees a cert issued by that proxy CA, not the real site.
  • If the client’s trust store doesn’t include that proxy’s root CA, validation fails → “unable to get local issuer certificate”.

2. Forward proxy (CONNECT tunnel)

  • If the proxy just forwards raw TLS (no inspection), usually no issue.
  • But if the proxy alters headers (Host/SNI) or drops part of the certificate chain, the server’s cert may not validate correctly.

3. Transparent proxy / middlebox

  • Some devices “peek” into TLS but don’t pass along the full chain → clients only see the leaf cert.
  • That also leads to the issuer validation error.

How to confirm if proxy is the problem

Run openssl s_client from behind the proxy and compare to running it directly (bypassing proxy):

openssl s_client -connect example.com:443 -servername example.com -showcerts

If the issuer/subject of the leaf cert changes (e.g. you see “Zscaler Root CA” instead of “Let’s Encrypt”), then the proxy is doing TLS interception.


Fixes if proxy is cause

  • Trusted root CA: import the proxy’s root/intermediate CA into the client trust store (system trust, Java cacerts, browser trust store, etc.).
  • Bypass proxy for domains where you need direct trust (if allowed by policy).
  • Update proxy config so it presents the full chain correctly.

👉 Do you want me to show you exactly how to check if your client’s proxy is re-signing the cert (step-by-step with openssl and comparing cert subjects)?

unable to get local issuer certificate

“unable to get local issuer certificate” means the TLS verifier can’t build a valid trust chain from the server’s cert → a trusted root. In plain English: the server didn’t send the right intermediate(s), or it’s using a CA your client doesn’t trust (e.g., a private CA or a Cloudflare Origin Certificate seen directly).

Here’s how to fix it fast in a Cloudflare → F5 → Kong → Upstream stack.


1) Identify which hop is failing

Run these from your laptop (replace host):

# Client → Cloudflare (public hostname)
openssl s_client -showcerts -servername api.example.com -connect api.example.com:443 </dev/null | sed -n '1,/-END CERTIFICATE-/p'

# Client → F5 (bypass Cloudflare: hit the VIP IP)
openssl s_client -showcerts -connect <F5_PUBLIC_IP>:443 </dev/null

# Direct to Kong node (if it terminates TLS)
openssl s_client -showcerts -connect <KONG_NODE_IP>:8443 </dev/null

# From inside Kong to the upstream (what Kong sees)
docker exec -it kong sh -lc 'apk add --no-cache openssl >/dev/null 2>&1 || true; \
  openssl s_client -showcerts -servername <UPSTREAM_HOST> -connect <UPSTREAM_HOST>:443 </dev/null'

Look at:

  • The presented chain (server cert + intermediates)
  • Verify return code: (should be 0 (ok))

2) Common root causes & specific fixes

A) Missing intermediate on your origin (F5/Kong/upstream)

Symptom: OpenSSL shows only the server cert, or depth=0 ok then fails at depth=1 with this error.

Fix:

  • F5: import the intermediate CA that issued your server cert and set it as the Chain in the Client SSL Profile bound to your VIP (Certificate + Key + Chain). Ensure the chain is complete (server → intermediate(s) → root).
  • Kong terminating TLS (TLS Ingress on 8443): configure cert + key using the full chain bundle (server + intermediates). Most proxies require fullchain.pem (not just cert.pem).
  • Upstream service: install a proper cert and serve the intermediates.

B) You used a Cloudflare Origin Certificate and connected directly to F5

Cloudflare Origin Certs are only trusted by Cloudflare, not by browsers/curl.
Symptom: Works when proxied through Cloudflare (orange cloud), fails when you hit the F5 IP directly.

Fix options:

  • Keep traffic proxied through Cloudflare (don’t bypass); or
  • Install a publicly trusted cert on F5 (e.g., Let’s Encrypt) for direct access.

C) Kong → Upstream uses an internal/private CA

Symptom: Client to Kong is fine; Kong logs show upstream TLS verify errors, or your route returns 500/502.

Fix (Kong Admin API):

  1. Upload your internal CA:
curl -s -X POST :8001/ca_certificates -F cert=@corp-root-or-intermediate.pem
# -> returns {"id":"<CA_ID>", ...}

  2. Attach it to the Service and enforce verification:
curl -s -X PATCH :8001/services/<service_id> \
  -d tls_verify=true \
  -d tls_verify_depth=2 \
  -d ca_certificates[]='<CA_ID>'

(Temporary only: you can set tls_verify=false to prove that CA trust is the issue—then turn it back on with the proper CA.)

D) Wrong SNI / Hostname mismatch

If the cert is for api.example.com but the TLS handshake SNI is different, validation fails.

  • Ensure F5 sends correct SNI when it re-encrypts to Kong/upstream.
  • In Kong Service for HTTPS upstreams, set sni=<upstream-cert-hostname>.

E) Old client trust store / corporate MITM

  • Older clients may lack newer roots (update trust store).
  • If a corporate proxy is intercepting TLS, clients need that proxy’s root in their trust store.

3) Quick “good config” checklist per hop

Cloudflare → F5

  • SSL mode: Full (strict).
  • (Optional) Authenticated Origin Pull: F5 must trust CF Origin Pull CA and require client cert.
  • F5 presents a cert chain your clients would trust when you test direct (unless you never hit it directly).

F5 → Kong

  • If re-encrypting: Server SSL profile with proper SNI to Kong (if Kong expects it). Otherwise send HTTP on a private VLAN to keep it simple.
  • Preserve headers (X-Forwarded-Proto, X-Forwarded-For).

Kong → Upstream

  • For HTTPS upstreams:
    • protocol=https, port=443, sni=<host>, tls_verify=true
    • Attach internal CA certificate if upstream uses a private CA (see step 2C).

4) Don’t “fix” by disabling verification (except to isolate)

  • curl -k, Node’s NODE_TLS_REJECT_UNAUTHORIZED=0, Java -Dcom.sun.net.ssl.checkRevocation=false—use only for a one-off test. The real fix is serving the right chain or installing the proper CA.

5) One-command validator you can paste in tickets

openssl s_client -connect api.example.com:443 -servername api.example.com -verify_return_error </dev/null

You’ll get a clear reason (missing issuer / self-signed / hostname mismatch).
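
The usual failure strings map onto the causes in section 2; here is a small lookup sketch (the error strings are OpenSSL's standard ones, the mapping is a rule of thumb):

```shell
# Map common `openssl s_client` verify errors to the likely fix above.
explain_verify() {
  case "$1" in
    *"unable to get local issuer certificate"*) echo "missing intermediate or untrusted CA (see 2A/2C)" ;;
    *"self-signed certificate"*)                echo "private or origin cert seen directly (see 2B)" ;;
    *"hostname mismatch"*)                      echo "wrong SNI or cert hostname (see 2D)" ;;
    *"certificate has expired"*)                echo "expired cert: renew it" ;;
    *)                                          echo "unrecognized: read the full s_client output" ;;
  esac
}

explain_verify "verify error:num=20:unable to get local issuer certificate"
# missing intermediate or untrusted CA (see 2A/2C)
```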


If you tell me which endpoint shows the error (Cloudflare URL, direct F5 VIP, Kong 8443, or Kong→Upstream) and paste the first ~40 lines of the openssl s_client output, I’ll point to the exact certificate/chain file you need to fix.

KONG

Got it — here’s a clean, practical way to wire Client → Cloudflare → F5 → Kong → Upstream so your normal Kong flow
Client → [Kong Route matched] → [Service invoked] → Upstream API works end-to-end.


0) Topology (recommended)

Client ─TLS─> Cloudflare ─TLS─> F5 VIP ─HTTP─> Kong (8000) ─(HTTP/HTTPS)─> Upstream

  • Cloudflare terminates client TLS + WAF/bot/DDOS.
  • F5 is your “origin” for Cloudflare, then load balances across Kong nodes.
  • Kong routes/plugins/auth, then proxies to your upstream.

You can also do F5→Kong over HTTPS (8443). Start with HTTP (simpler), add re-encryption later.


1) Cloudflare (edge) setup

  1. DNS: api.example.com → orange-cloud (proxied) to your F5 public IP.
  2. SSL/TLS: set mode Full (strict).
  3. Authenticated Origin Pull (AOP):
    • Enable it in Cloudflare.
    • On F5, require the Cloudflare client cert (see F5 step) so only CF can hit your VIP.
  4. Origin server certificate (on F5): either
    • a normal public cert (LetsEncrypt, etc.), or
    • a Cloudflare Origin Certificate (valid only to CF; fine if you never bypass CF).
  5. API cache rules: bypass cache for /api/*, enable WebSockets if you use them.

2) F5 LTM (VIP to Kong) essentials

Virtual server (HTTPS on 443) → pool (Kong nodes on 8000)

  • Client SSL profile: present your cert (public or CF Origin Cert).
  • (Optional but recommended) verify Cloudflare client cert for AOP:
    import Cloudflare Origin Pull CA on F5 and set it as Trusted CA; require client cert.
  • HTTP profile: enable; Insert X-Forwarded-For; preserve headers.
  • OneConnect: enable for keep-alives to Kong.
  • Pool members: all Kong nodes, port 8000 (or 8443 if you re-encrypt).
  • Health monitor: HTTP GET to Kong status (see Kong step).

Header hygiene (iRule – optional, if you want to be explicit):

when HTTP_REQUEST {
  # Preserve real client IP from Cloudflare into X-Forwarded-For
  if { [HTTP::header exists "CF-Connecting-IP"] } {
    set cfip [HTTP::header value "CF-Connecting-IP"]
    if { [HTTP::header exists "X-Forwarded-For"] } {
      HTTP::header replace "X-Forwarded-For" "[HTTP::header value X-Forwarded-For], $cfip"
    } else {
      HTTP::header insert "X-Forwarded-For" $cfip
    }
  }
  HTTP::header replace "X-Forwarded-Proto" "https"
}

(Or just enable “X-Forwarded-For: append” in the HTTP profile and set XFP via policy.)


3) Kong Gateway settings (behind F5)

Environment (docker-compose/env vars):

KONG_PROXY_LISTEN=0.0.0.0:8000
# Trust only your F5 addresses/CIDRs (do NOT trust 0.0.0.0/0)
KONG_TRUSTED_IPS=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,<F5_VIP_or_SNATs>
KONG_REAL_IP_HEADER=X-Forwarded-For
KONG_REAL_IP_RECURSIVE=on
# helpful during testing
KONG_HEADERS=latency_tokens
# health endpoint for F5 monitor (internal only)
KONG_STATUS_LISTEN=0.0.0.0:8100

F5 health monitor target (HTTP):

  • URL: http://<kong-node>:8100/status
  • Expect: 200 (Kong’s status port; safe to expose internally)

Define Service/Route (example):

# Service to your upstream (HTTP)
curl -sX POST :8001/services \
  -d name=orders-svc \
  -d host=tomcat-app \
  -d port=8080 \
  -d protocol=http

# Route that matches your API path/host
curl -sX POST :8001/routes \
  -d service.name=orders-svc \
  -d hosts[]=api.example.com \
  -d paths[]=/v1

(If upstream is HTTPS, set protocol=https, port=443, and sni=upstream.host.)


4) Putting it together (request path)

  1. Client requests https://api.example.com/v1/...
  2. Cloudflare terminates TLS, adds CF-Connecting-IP, forwards to F5.
  3. F5 validates CF client cert (AOP), appends XFF, sets X-Forwarded-Proto:https, LB to a Kong node.
  4. Kong trusts F5 IP, extracts the real client IP from XFF, matches Route (Host=api.example.com, path=/v1), invokes Service, proxies to Upstream.
  5. Response flows back; Kong adds latency headers (if enabled); F5 returns to CF; CF returns to client.

5) Testing (in layers)

  • Direct to Kong (bypass CF/F5) on the private network: curl -i -H 'Host: api.example.com' http://<kong-node>:8000/v1/ping -H 'Kong-Debug: 1'
  • Through F5 VIP (simulate Cloudflare off): curl -i https://api.example.com/v1/ping --resolve api.example.com:443:<f5-public-ip>
  • Through Cloudflare (real path): curl -i https://api.example.com/v1/ping -H 'Kong-Debug: 1'

Check headers:

  • X-Kong-Upstream-Latency (upstream time)
  • X-Kong-Proxy-Latency (Kong internal)
  • Logs in Kong show client_ip = real client IP (not F5/CF) if trusted IPs are correct.

6) Observability (recommended)

  • Kong plugins:
    • prometheus (metrics), correlation-id (X-Request-ID), http-log or file-log → ELK.
  • F5: request logging profile to ELK; LTM stats.
  • Cloudflare: Security events/Edge logs (if plan allows).

7) Common gotchas (and quick fixes)

  • Client IP shows F5/Cloudflare → set KONG_TRUSTED_IPS, REAL_IP_*; ensure F5 appends XFF and passes CF-Connecting-IP.
  • OIDC header lost → make sure F5 policy does not strip Authorization.
  • Wrong redirects / mixed content → ensure X-Forwarded-Proto: https reaches Kong & upstream.
  • “Server busy” / spikes → raise F5 OneConnect pool, ensure Kong backlog/somaxconn, keepalive to upstream.
  • Health checks hitting Admin API → use KONG_STATUS_LISTEN (8100) instead, not port 8001 (Admin).

If you want, I can:

  • generate a Kong docker-compose block with those envs,
  • give you an F5 monitor config snippet,
  • and add a sample OIDC plugin + rate-limit plugin to the route for a production-like baseline.

Cloudflare → F5 → Kong API stack

Here’s the clean mental model for a Cloudflare → F5 → Kong API stack, plus the key headers/TLS choices and the minimum config knobs so it “just works.”

1) The traffic flow (happy path)

Client ─(HTTPS/HTTP2/3)→ Cloudflare ─(TLS to origin)→ F5 VIP ─(HTTP/HTTPS)→ Kong cluster ─→ Upstream services

  1. Cloudflare terminates the client TLS (always, if the orange-cloud proxy is on).
    • Applies WAF, DDoS, bot rules, rate limits, geo rules, etc.
    • Forwards to your F5 VIP as the “origin”.
  2. F5 LTM receives Cloudflare’s request.
    • Usually terminates TLS again (re-encrypt to Kong or send plain HTTP on the inside).
    • Load-balances across Kong nodes (pool members in Zone A/B).
  3. Kong Gateway routes by Host/path to your backend (service), runs plugins (OIDC, rate-limit, etc.), and proxies to the upstream.

2) TLS choices (pick one per hop)

Cloudflare → F5 (origin TLS):

  • Set Cloudflare SSL mode to Full (strict).
  • Enable Authenticated Origin Pull so only Cloudflare can hit F5.
  • On F5, trust Cloudflare’s Origin Pull CA and require client cert.

F5 → Kong:

  • Simple: terminate on F5 and send HTTP to Kong on the private VLAN.
  • End-to-end TLS: client-SSL on F5, server-SSL from F5 to Kong (re-encrypt), SNI kong.internal (or node name).

Kong → Upstream:

  • Match your upstream: protocol=http|https, SNI if TLS, optionally mTLS to sensitive services.
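One way to sanity-check each TLS hop from a shell is openssl s_client; a sketch, reusing the placeholders from above (substitute real addresses before running):

```shell
# Cloudflare → F5 hop: does the origin cert match the hostname and its dates?
# Full (strict) requires a cert valid for api.example.com from a trusted CA
# (public or the Cloudflare Origin CA).
openssl s_client -connect <f5-public-ip>:443 -servername api.example.com \
  </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates

# F5 → Kong hop (from inside the network), if re-encrypting with SNI:
# openssl s_client -connect <kong-node>:8443 -servername kong.internal </dev/null
```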

3) Real client IP (do this or logs/limits will be wrong)

Cloudflare sets:

  • CF-Connecting-IP (client IP)
  • X-Forwarded-For (appends client IP)
  • X-Forwarded-Proto: https

F5 should preserve (not overwrite) X-Forwarded-For and pass X-Forwarded-Proto.

Kong must trust the proxy chain so it can compute the real client IP:

  • Set (env or kong.conf):
    • KONG_TRUSTED_IPS=<F5 private CIDRs or F5 VIPs> (don’t trust 0.0.0.0/0)
    • KONG_REAL_IP_HEADER=X-Forwarded-For
    • KONG_REAL_IP_RECURSIVE=on
  • Then client_ip in logs, rate-limit/correlation will be the actual user IP from Cloudflare.

If you prefer using Cloudflare’s header explicitly, you can have F5 copy CF-Connecting-IP into the leftmost position of X-Forwarded-For.
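The leftmost-entry idea can be sketched as a tiny helper (illustrative only — Kong computes this internally from the trusted-proxy chain when real_ip_recursive is on):

```shell
# Given an X-Forwarded-For value, print the leftmost (original client) entry.
leftmost_xff() {
  printf '%s\n' "$1" |
    awk -F',' '{ gsub(/^[ \t]+|[ \t]+$/, "", $1); print $1 }'
}

# leftmost_xff '203.0.113.7, 172.70.0.1, 10.0.0.5'   → 203.0.113.7
```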

4) Load balancing & health checks (avoid double “mystery” failover)

  • Cloudflare (optional LB): usually point it at a single F5 VIP per region and let F5 do node health.
  • F5 → Kong nodes: HTTP health monitor (e.g., GET /status/health on each Kong).
  • Kong → upstreams: use Kong Upstreams/Targets with active + passive health checks to eject bad app pods.

Pick one layer to be the source of truth per hop (Cloudflare LB or F5, Kong or upstream LB) to avoid contradictory decisions.
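For the Kong → upstream piece, a hedged Admin API sketch of an Upstream with active + passive checks (name, target, path, and thresholds are placeholders; Admin API assumed on localhost:8001):

```shell
# Active probing of /health every 5s, plus passive ejection after 3 failures.
curl -s http://localhost:8001/upstreams \
  -d name=app.upstream \
  -d healthchecks.active.http_path=/health \
  -d healthchecks.active.healthy.interval=5 \
  -d healthchecks.active.unhealthy.interval=5 \
  -d healthchecks.active.unhealthy.http_failures=3 \
  -d healthchecks.passive.unhealthy.http_failures=3

# Register a backend pod/VM as a target.
curl -s http://localhost:8001/upstreams/app.upstream/targets \
  -d target=10.0.1.21:8080
```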

5) Protocols & connections

  • HTTP versions: client can be HTTP/2 or HTTP/3 to Cloudflare. Cloudflare→F5 is HTTP/1.1 or HTTP/2 (CF may downgrade). F5→Kong is typically HTTP/1.1.
  • Keep-alive: enable OneConnect on F5 and keep-alive to Kong to avoid connection churn.
  • WebSockets/gRPC: supported end-to-end; ensure the Upgrade header (WebSockets) and HTTP/2 (gRPC) pass through F5 and are configured on the Kong Routes/Services.

6) Minimal config snippets

F5 (HTTP profile / header handling):

  • Enable “Insert X-Forwarded-For” (or an iRule to append not overwrite).
  • Preserve X-Forwarded-Proto = https.
  • If using Authenticated Origin Pull: client-SSL requires CF client cert; trust CF Origin CA.

Kong (env):

KONG_HEADERS=latency_tokens
KONG_TRUSTED_IPS=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,<your F5 VIPs/CIDRs>
KONG_REAL_IP_HEADER=X-Forwarded-For
KONG_REAL_IP_RECURSIVE=on

Kong Service (HTTPS upstream with SNI):

protocol: https
host: api.internal.example
port: 443
tls_verify: true
sni: api.internal.example

Kong rate limiting behind proxies: use the rate-limiting plugin with limit_by=consumer or limit_by=ip; with trusted IPs set, ip means the real client, not the F5 address.
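A hedged example of attaching IP-based rate limiting to a route over the Admin API (the route name, window, and policy are placeholders):

```shell
# 60 requests/min per real client IP; "local" counts per Kong node
# (use "redis" or "cluster" for a shared counter across nodes).
curl -s http://localhost:8001/routes/my-route/plugins \
  -d name=rate-limiting \
  -d config.minute=60 \
  -d config.policy=local \
  -d config.limit_by=ip
```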

7) Observability (what to turn on)

  • Cloudflare: (if plan allows) Edge logs / Security events (attack/waf/bot).
  • F5: LTM logs + request logging profiles; export to ELK.
  • Kong: enable Prometheus plugin, correlation-id, and http-log/file-log to ELK.
  • Make sure your ELK/Loki sees client_ip, X-Request-ID, service, route, upstream_status, latencies.{kong,proxy,request}.
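The Kong side of that list can be enabled via the Admin API; a sketch with global scope (the ELK endpoint is a placeholder):

```shell
# Metrics for Prometheus to scrape.
curl -s http://localhost:8001/plugins -d name=prometheus

# Tag every request with X-Request-ID and echo it back to the client.
curl -s http://localhost:8001/plugins -d name=correlation-id \
  -d config.header_name=X-Request-ID \
  -d config.echo_downstream=true

# Ship access logs to ELK over HTTP.
curl -s http://localhost:8001/plugins -d name=http-log \
  -d config.http_endpoint=http://elk.internal:8080/kong-logs
```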

8) Common pitfalls & quick fixes

  • Client IP shows F5/Cloudflare: set KONG_TRUSTED_IPS and KONG_REAL_IP_* as above; ensure F5 appends—not overwrites—X-Forwarded-For.
  • OIDC 401s behind CF/F5: your LB stripped Authorization. Validate with an echo upstream; ensure F5 leaves Authorization intact.
  • Mixed-content / wrong redirects: preserve X-Forwarded-Proto: https through to Kong and upstream so apps build HTTPS URLs.
  • Header too large (Set-Cookie): raise Kong Nginx proxy buffers if you see upstream sent too big header.
  • Double LB flaps: don’t health-check the same hop in both Cloudflare and F5 with different intervals/paths.
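For the header-size pitfall, Kong can inject Nginx directives through env vars; a sketch with illustrative sizes (tune to your largest Set-Cookie/JWT headers):

```shell
# Injected as proxy_buffer_size / proxy_buffers / proxy_busy_buffers_size
# into the generated nginx proxy block.
KONG_NGINX_PROXY_PROXY_BUFFER_SIZE=16k
KONG_NGINX_PROXY_PROXY_BUFFERS="8 16k"
KONG_NGINX_PROXY_PROXY_BUSY_BUFFERS_SIZE=32k
```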

9) Example end-to-end (typical, secure)

  1. Cloudflare Full (strict) + Authenticated Origin Pull → F5 public VIP api.example.com:443.
  2. F5 terminates TLS, adds XFF if missing, passes X-Forwarded-Proto: https, LB to kongA:8000, kongB:8000.
  3. Kong trusts F5 IPs, pulls real client IP from XFF, runs OIDC/rate-limit, sends to upstream (HTTPS with SNI).
  4. Logs/metrics from all 3 land in ELK/Grafana.

If you want, I can drop a ready-to-run Kong docker-compose with those envs set, plus example F5 monitor path and a Kong Service/Route pointing to your php/tomcat containers.