latency

That message comes from Nginx (which Kong runs on). It means the upstream response didn’t fit in memory buffers, so Nginx spooled it to disk under /usr/local/kong/proxy_temp. It’s not fatal, but it adds I/O and latency.

Here are practical fixes—pick what matches your API pattern.

What you can do

1) Quick stopgap: don’t write to disk

Prevents temp-file writes; responses are still buffered in memory, and anything that doesn’t fit the memory buffers is passed to the client synchronously instead of being spooled.

proxy_buffering on;
proxy_max_temp_file_size 0;

2) Stream responses (no buffering at all)

Great for large downloads/streaming; reduces latency & disk I/O. (Backpressure goes directly to upstream.)

proxy_buffering off;

3) Increase memory buffers (keep buffering, avoid disk)

Size these to your typical response size and concurrency.

proxy_buffering on;
proxy_buffer_size 64k;        # header/first buffer
proxy_buffers 32 64k;         # total ~2 MB per connection here
proxy_busy_buffers_size 256k; # busy threshold before spooling
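
Before raising these, it helps to budget the RAM: worst case is roughly proxy_buffers count × size per buffered connection, times your concurrency. A quick sketch (the 2000-connection figure is an assumption, not from this setup):

```shell
# rough worst-case RAM for response buffering (assumed values; adjust to yours)
BUF_COUNT=32      # proxy_buffers count
BUF_KB=64         # proxy_buffers size in KB
CONNS=2000        # assumed concurrent buffered connections
PER_CONN_KB=$((BUF_COUNT * BUF_KB))
TOTAL_MB=$((PER_CONN_KB * CONNS / 1024))
echo "per-connection: ${PER_CONN_KB} KB, worst case total: ${TOTAL_MB} MB"
```

With these numbers that is ~2 MB per connection and ~4 GB worst case, which is why buffer sizes should track your typical, not maximum, response size.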

4) If you must spool, make it fast

Put Kong’s proxy temp on tmpfs or faster disk:

  • Mount /usr/local/kong/proxy_temp on tmpfs (container/VM)
  • Or relocate it with the proxy_temp_path directive (path below is an example): proxy_temp_path /mnt/fast/proxy_temp 1 2;

5) Tame slow clients (common cause of spooling)

Slow downloads force Nginx to hold data. Tighten/adjust:

send_timeout 30s;
tcp_nodelay on;
keepalive_timeout 65s;

(Shorter timeouts reduce long-lived slow sends.)

How to set these in Kong

Pick one method you already use:

A) With kong.conf

Add proxy-level directives (no need to rebuild Kong):

nginx_proxy_proxy_buffering = on
nginx_proxy_proxy_max_temp_file_size = 0
nginx_proxy_proxy_buffer_size = 64k
nginx_proxy_proxy_buffers = 32 64k
nginx_proxy_proxy_busy_buffers_size = 256k

(Each nginx_proxy_<directive> entry is injected into Nginx’s proxy server block; restart/reload Kong afterwards.)
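
After the reload, you can confirm the directives actually landed in the rendered Nginx config. A sketch, assuming the default Kong prefix /usr/local/kong (adjust the path for your deployment):

```shell
# count our injected directives in the rendered proxy config
count_injected() {
  grep -cE 'proxy_(buffering|max_temp_file_size|buffer_size|buffers|busy_buffers_size)' "$1"
}
# count_injected /usr/local/kong/nginx-kong.conf   # expect 5 if all landed
```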

B) With environment variables (containers)

KONG_NGINX_PROXY_PROXY_BUFFERING=on
KONG_NGINX_PROXY_PROXY_MAX_TEMP_FILE_SIZE=0
KONG_NGINX_PROXY_PROXY_BUFFER_SIZE=64k
KONG_NGINX_PROXY_PROXY_BUFFERS="32 64k"
KONG_NGINX_PROXY_PROXY_BUSY_BUFFERS_SIZE=256k

C) With a custom Nginx template

If you already manage an nginx-kong.conf template, put the directives in the proxy context and redeploy.

D) Helm (Kubernetes)

If using the Kong Helm chart, put the same options under the chart’s env: block; each key becomes a KONG_ environment variable (e.g. env.nginx_proxy_proxy_buffering: "on"). (Exact key names vary by chart version; mirror the options above.)

What to choose

  • APIs returning big files / streaming → proxy_buffering off;
  • APIs with occasional large JSON → keep buffering on and increase proxy_buffer_* sizes; also set proxy_max_temp_file_size 0;
  • Mobile or very slow clients → consider buffering on + larger buffers, or keep temp on tmpfs

Validate

  1. Hit the route and watch /usr/local/kong/proxy_temp usage.
  2. Check error log: the “buffered to a temporary file” line should stop.
  3. Watch p95/p99 request latency; it usually drops when disk spooling stops.
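
For step 2, a quick counter you can run before and after the change (the error-log path assumes the default Kong prefix; adjust to your install):

```shell
# count temp-file spooling warnings in an Nginx/Kong error log
spool_count() {
  grep -c 'buffered to a temporary file' "$1"
}
# spool_count /usr/local/kong/logs/error.log
```

If the count stops growing after the reload, the buffers are doing their job.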

If you share your response size range and traffic pattern (download/JSON/streaming), I’ll tune exact buffer counts/sizes so you don’t overspend RAM.

500 errors

Best practice for 500 errors

  • Turn up Kong’s error.log verbosity (log_level = debug in kong.conf).
  • Use file-log or http-log plugin to capture structured traffic.
  • Use correlation-id plugin to align Kong + upstream logs.
  • Always check upstream service logs — a 500 originates there, not in Kong.

Kong automatically adds X-Kong-Request-ID (if enabled).

  • Add plugin: correlation-id
  • Configure a header (e.g. X-Request-ID) so you can trace across Kong logs, upstream app logs, and client logs.
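
You can also seed the header from the client side so one ID spans client, Kong, and upstream logs. A sketch (the endpoint is a placeholder; the fallback chain covers hosts without /proc or uuidgen):

```shell
# generate a correlation id client-side and send it in the configured header
REQ_ID=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || uuidgen 2>/dev/null || echo fallback-id)
echo "tracing with X-Request-ID: $REQ_ID"
# curl -s -H "X-Request-ID: $REQ_ID" https://api.example.com/path   # hypothetical endpoint
```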

API – response time

Here are fast, reliable ways to measure client-side API response time (and break it down) — from your laptop or from an EKS pod.

1) One-shot timing (curl)

This prints DNS, TCP, TLS, TTFB, and Total in one go:

curl -s -o /dev/null -w '
{ "http_code":%{http_code},
  "remote_ip":"%{remote_ip}",
  "dns":%{time_namelookup},
  "tcp":%{time_connect},
  "tls":%{time_appconnect},
  "ttfb":%{time_starttransfer},
  "total":%{time_total},
  "size":%{size_download},
  "speed":%{speed_download}
}
' https://api.example.com/path

Fields

  • dns: DNS lookup
  • tcp: TCP connect
  • tls: TLS handshake (0 if HTTP)
  • ttfb: time to first byte (request→first response byte)
  • total: full download time

2) From EKS (ephemeral pod)

Run N samples and capture a CSV:

kubectl run curl --rm -i --restart=Never --image=curlimages/curl:8.8.0 -- \
sh -c 'for i in $(seq 1 50); do \
  curl -s -o /dev/null -w "%{time_namelookup},%{time_connect},%{time_appconnect},%{time_starttransfer},%{time_total}\n" \
  https://api.example.com/health; \
done' > timings.csv

(Use -i without -t: a TTY would inject carriage returns into the redirected CSV.)

Open timings.csv and look at columns: dns,tcp,tls,ttfb,total. Large ttfb means slow upstream/app; big tls means handshake issues; big gap total - ttfb means payload/download time.
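
To summarize the CSV without a spreadsheet, a small sort+awk pass (column order matches the -w format above; p95 here is the nearest-rank percentile):

```shell
# mean and p95 of the 'total' column (5th field); usage: timing_stats timings.csv
timing_stats() {
  sort -t, -k5 -g "$1" | awk -F, '
    { v[NR] = $5; sum += $5 }
    END { i = int(NR * 0.95); if (i < 1) i = 1;
          printf "samples=%d mean=%.3fs p95=%.3fs\n", NR, sum / NR, v[i] }'
}
# timing_stats timings.csv
```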

3) Separate proxy vs upstream (Kong in the path)

Kong adds latency headers you can read on the client:

curl -i https://api.example.com/path | sed -n 's/^\(x-kong-.*latency\): \(.*\)$/\1: \2/p'
# x-kong-proxy-latency: <ms>    (time spent inside Kong before proxying to upstream)
# x-kong-upstream-latency: <ms> (time the upstream took to respond)

These help you see if delay is at the gateway or in the service.

4) Quick load/percentiles (pick one)

  • hey: hey -z 30s -c 20 https://api.example.com/path
  • vegeta: echo "GET https://api.example.com/path" | vegeta attack -rate=20 -duration=30s | vegeta report
  • k6 (scriptable): save as test.js

import http from 'k6/http';
import { check } from 'k6';

export const options = { vus: 20, duration: '30s', thresholds: { http_req_duration: ['p(95)<300'] } };

export default () => {
  const r = http.get('https://api.example.com/path');
  check(r, { '200': (res) => res.status === 200 });
};

Run: k6 run test.js

5) App-level timers (optional)

Add a Server-Timing header from the API to expose your own phase timings (DB, cache, etc.). Then the client can read those headers to correlate.

6) Common gotchas

  • Proxies can add latency; test both with and without proxy (NO_PROXY / --proxy).
  • Auth: measure with real headers/tokens; 401/403 will skew.
  • SNI/Host: if hitting by IP, use --resolve host:443:IP -H "Host: host" so cert/routing is correct.
  • Warmup: discard first few samples (JIT, caches, TLS session reuse).

If you want, share a few curl -w outputs from local vs EKS and I’ll pinpoint where the time is going (DNS/TLS/TTFB/payload).

tcpdump

Short answer: tcpdump can capture payload, but only if the traffic is plaintext.
If it’s HTTPS/TLS (which most APIs are), tcpdump will not show the body unless you decrypt it. It also won’t write into your “app logs”—it just captures packets.

Here are your practical options:

  • Capture on a plaintext hop (e.g., the Kong → upstream leg if it speaks plain HTTP); there tcpdump shows full bodies.
  • Decrypt in Wireshark: have the client export TLS session keys via SSLKEYLOGFILE (curl, browsers, and some runtimes support it), then point Wireshark at the keylog file.
  • Skip packet capture and log bodies where they are already plaintext: app-level logging, or a Kong logging plugin such as http-log.

403

Short version: the cipher suite isn’t why you’re getting 403. A 403 is an authorization/ACL/WAF decision at L7. The TLS differences you saw (TLS 1.3 locally vs TLS 1.2 on EKS) are fine and expected depending on your load balancer policy.

What to do first (to find the real 403 source)

  1. Hit the EKS URL with verbose curl and look at headers:
curl -i -v https://your.domain/path

  • If the response headers look AWS-y and you see it immediately, the 403 likely comes from ALB/WAF.
  • If it reaches your pod/ingress and then 403s, it’s from Ingress/Kong/app (e.g., missing Authorization, IP allowlist, route rule, JWT/OIDC plugin, etc.).
  2. Check ALB access logs for the request:
  • If elb_status_code = 403, ALB/WAF blocked it (WAF rules often show up as 403).
  • If elb_status_code = 200 and target_status_code = 403, your target/app returned 403. (AWS Documentation)

Recommended TLS settings (good practice, not the 403 fix)

If you terminate TLS on an AWS ALB (via AWS Load Balancer Controller)

Use a modern security policy that enables TLS 1.3 and falls back to TLS 1.2:

metadata:
  annotations:
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-2021-06

That policy offers TLS 1.3 and 1.2 with strong ciphers. (If you truly want TLS 1.3 only: ELBSecurityPolicy-TLS13-1-3-2021-06.) (AWS Documentation, Kubernetes SIGs)

Tip: If you need maximum client compatibility, attach both an RSA and an ECDSA ACM cert (ALB supports multiple certs):
alb.ingress.kubernetes.io/certificate-arn: arn:...rsa,arn:...ecdsa. (Kubernetes SIGs)

If you terminate TLS on an NLB (TLS listener)

Pick a TLS 1.3 policy on the listener; NLB supports it and will use appropriate backend defaults. (AWS Documentation)

If you terminate TLS on ingress-nginx inside the cluster

Use TLS 1.2 + 1.3 and a modern cipher list:

# ConfigMap for ingress-nginx controller
data:
  ssl-protocols: "TLSv1.2 TLSv1.3"
  ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305"
  ssl-prefer-server-ciphers: "off"

(Those are the ingress-nginx docs’ modern defaults; TLS 1.3 ciphers are implicit.) (kubernetes.github.io)


Why your TLS info looks different

  • Local: likely direct to your app or local reverse proxy → negotiates TLS 1.3 (TLS_AES_128_GCM_SHA256 is one of the three TLS 1.3 ciphers).
  • EKS: you’re probably terminating at ALB (or an older policy) → negotiated TLS 1.2 (TLS_ECDHE_ECDSA_WITH_AES_128_…). That’s still strong if you use a modern ALB policy. (AWS Documentation)

Quick 403 triage checklist

  • AWS WAF attached? Check WAF logs for the rule ID; 403s here are common and configurable. (Repost, Amazon Web Services, Inc.)
  • Auth headers missing? Ingress/controller or a proxy hop might be stripping/renaming Authorization. Confirm with curl -v -H "Authorization: Bearer …".
  • Host/Path mismatch? Ingress host: and path rules must match exactly; wrong Host yields 4xx at ingress.
  • IP allowlist / Geo rules? Compare client IP vs policy.
  • Kong/OIDC/JWT plugin? Kong often 403s on invalid/absent token or unauthorized consumer; check Kong logs for the route and plugin decision.
  • CORS preflight? Browsers can show 403 on failed OPTIONS—ensure ingress routes OPTIONS.

If you share:

  • one ALB access log line for a blocked request, and/or
  • the response headers from curl -i -v on EKS,

I’ll tell you precisely whether the 403 is from ALB/WAF or from your Ingress/Kong/app—and the exact knob to turn.


Recap: locally you can call the on-prem API just fine, but from EKS you get 403.
Handshake is OK (so network + TLS basics are fine). This is almost never a cipher issue—403 is an L7 decision (WAF/edge/Kong/app/proxy policy).

Here’s a tight plan to pinpoint and fix it.

1) Identify who is returning the 403 (one command from EKS)

Run from an EKS pod (ephemeral curl pod shown below) and inspect headers:

kubectl run curl --rm -it --image=curlimages/curl:8.8.0 -- \
  sh -c 'curl -i -v https://your.onprem.host/path -H "Authorization: Bearer $TOKEN" --max-time 20'

Look for these telltales in the 403 response:

  • Cloudflare: server: cloudflare, cf-ray, cf-cache-status
  • F5: server: BigIP or ASM headers like X-WA-Info
  • Kong: server: kong, x-kong-*
  • Proxy (corp/Zscaler/etc.): Via, X-BlueCoat-Via, custom proxy headers
  • Your app/ingress: app-specific headers, or none
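
You can script the telltale check so the same probe classifies the response for you. A sketch (pure heuristics based on the header list above; real header sets vary by vendor and version):

```shell
# read a 403's response headers on stdin and guess who sent it
classify_403() {
  h=$(tr '[:upper:]' '[:lower:]')          # normalize header case
  case "$h" in
    *cf-ray*|*"server: cloudflare"*)  echo "cloudflare edge" ;;
    *"server: bigip"*|*x-wa-info*)    echo "f5" ;;
    *x-kong-*|*"server: kong"*)       echo "kong" ;;
    *x-bluecoat-via*|*"via:"*)        echo "proxy hop" ;;
    *)                                echo "app/ingress (no telltales)" ;;
  esac
}
# usage: curl -sI https://your.onprem.host/path | classify_403
```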

If your pod must use an egress proxy, also try explicitly through it:

kubectl run curlp --rm -it --image=curlimages/curl:8.8.0 -- \
 sh -c 'curl -i -v https://your.onprem.host/path --proxy http://PROXY_HOST:PROXY_PORT --max-time 20'

2) Fix by source

A) WAF/Edge (Cloudflare/F5) is denying EKS

  • Likely IP allowlist/geo/ASN or bot rule.
    Fix: Allowlist your EKS NAT/Egress IP(s) or relax/adjust the specific WAF rule; confirm in WAF/Firewall logs.

B) Corporate egress proxy is denying

  • From EKS, traffic goes via a corp proxy that enforces access policy or strips Authorization.
    Fix:
    • Whitelist the destination domain on the proxy, or
    • Set NO_PROXY=your.onprem.host on the workload (if policy allows), or
    • Ensure the proxy allows the path and forwards all headers (esp. Authorization).
    • If the proxy does TLS inspection + your server expects client mTLS, either bypass inspection for this domain or use a two-hop mTLS model.

C) Kong/Ingress/app is denying

  • Common causes: missing/invalid token, ACL/IP allowlist plugin, wrong Host header/SNI, or CORS preflight.
    Fix:
    • Verify the request from EKS includes the same auth headers as local (env var or secret may be missing).
    • If calling by IP, set the correct Host header:
      curl -H "Host: your.onprem.host" --resolve your.onprem.host:443:W.X.Y.Z https://your.onprem.host/...
    • Check Kong logs for route/plugin decision (look for consumer, route, service, plugin names).

3) Quick differentials to run from EKS

  • Compare with and without proxy env:
# Without proxy (if allowed)
curl -i -v https://your.onprem.host/path

# Via proxy
curl -i -v https://your.onprem.host/path --proxy http://PROXY_HOST:PROXY_PORT

  • Preserve Host/SNI if you must target an IP:
curl -i -v --resolve your.onprem.host:443:W.X.Y.Z https://your.onprem.host/path

  • Include your real auth:
curl -i -v -H "Authorization: Bearer $TOKEN" https://your.onprem.host/path

  • See who really answered:
    Check Server, Via, cf-ray, x-kong*, X-Envoy-*, etc., in the 403 response.

4) About ciphers/TLS

Use modern policies that allow TLS 1.3 + 1.2 with ECDHE + AES-GCM/CHACHA. The exact cipher (e.g., TLSv1.3 TLS_AES_128_GCM_SHA256 locally vs TLSv1.2 ECDHE_*_AES_128_GCM_SHA256 from EKS) will not cause a 403. It’s fine to keep both TLS 1.2 and 1.3 enabled; the fix lies in the access/authorization layer, not the cipher.


If you paste the 403 response headers from the EKS curl (no body needed), I can tell you in one shot whether it’s WAF/edge, proxy, Kong, or your app, and give the exact knob to turn (allowlist, header forward, plugin, or bypass).

cert error

That error means the client sees a cert chain that includes a CA it doesn’t trust—most commonly a corporate proxy’s self-signed root (TLS inspection) or a misconfigured server chain. Here’s the fastest way to fix it given your AWS client + proxy setup.

0) Quick proof (run behind the proxy)

# Shows who signed what the client actually sees
openssl s_client -proxy PROXY_HOST:PROXY_PORT \
  -connect YOUR_HOST:443 -servername YOUR_HOST -showcerts -verify_return_error </dev/null \
| openssl x509 -noout -subject -issuer -enddate

  • If the issuer is your company/Zscaler/BlueCoat/etc → it’s proxy interception.
  • If the issuer is a normal public CA but the chain still fails → your server/edge isn’t sending the proper intermediates.

how to integrate token in java call

Gotcha—here are quick, copy-paste ways to add an OIDC/JWT Bearer token to Java HTTP calls. Pick the stack you use.


Java 11+ HttpClient

var client = java.net.http.HttpClient.newHttpClient();
String token = "eyJ..."; // put your access token here

var req = java.net.http.HttpRequest.newBuilder()
    .uri(URI.create("https://api.example.com/resource"))
    .header("Authorization", "Bearer " + token)
    .header("Content-Type", "application/json")
    .POST(java.net.http.HttpRequest.BodyPublishers.ofString("{\"ping\":\"pong\"}"))
    .build();

var res = client.send(req, java.net.http.HttpResponse.BodyHandlers.ofString());
System.out.println(res.statusCode() + " " + res.body());


OkHttp (popular + simple)

OkHttpClient client = new OkHttpClient.Builder()
    .addInterceptor(chain -> {
        String token = "eyJ..."; // inject per request or from a provider
        Request req = chain.request().newBuilder()
            .addHeader("Authorization", "Bearer " + token)
            .build();
        return chain.proceed(req);
    })
    .build();

Request request = new Request.Builder()
    .url("https://api.example.com/resource")
    .post(RequestBody.create("{\"ping\":\"pong\"}", MediaType.get("application/json")))
    .build();

try (Response resp = client.newCall(request).execute()) {
    System.out.println(resp.code());
}


Apache HttpClient 5

var httpClient = HttpClients.custom()
    .addRequestInterceptorFirst((HttpRequest request, EntityDetails entity, HttpContext ctx) -> {
        request.addHeader("Authorization", "Bearer " + "eyJ...");
    })
    .build();

var post = new HttpPost("https://api.example.com/resource");
post.setHeader(HttpHeaders.CONTENT_TYPE, "application/json");
post.setEntity(new StringEntity("{\"ping\":\"pong\"}", StandardCharsets.UTF_8));

try (var resp = httpClient.execute(post)) {
    System.out.println(resp.getCode());
}


Spring (WebClient) — preferred in Spring Boot

@Bean
WebClient webClient() {
  return WebClient.builder()
      .filter((req, next) -> {
        String token = "eyJ..."; // inject from a bean that caches/refreshes
        ClientRequest r = ClientRequest.from(req)
            .header(HttpHeaders.AUTHORIZATION, "Bearer " + token).build();
        return next.exchange(r);
      })
      .build();
}

// use it
webClient().post().uri("https://api.example.com/resource")
  .contentType(MediaType.APPLICATION_JSON)
  .bodyValue(Map.of("ping","pong"))
  .retrieve().toEntity(String.class).block();


Spring (RestTemplate)

RestTemplate rt = new RestTemplate();
rt.getInterceptors().add((req, body, ex) -> {
  req.getHeaders().setBearerAuth("eyJ...");
  return ex.execute(req, body);
});
ResponseEntity<String> resp = rt.getForEntity("https://api.example.com/resource", String.class);


Feign (OpenFeign)

@Bean
public RequestInterceptor bearerAuth() {
  return template -> template.header("Authorization", "Bearer " + "eyJ...");
}


JAX-WS / SOAP (header example)

SOAP 1.1 often also needs SOAPAction, but the Bearer goes in HTTP headers:

BindingProvider bp = (BindingProvider) port;
Map<String, List<String>> headers = new HashMap<>();
headers.put("Authorization", List.of("Bearer eyJ..."));
bp.getRequestContext().put(MessageContext.HTTP_REQUEST_HEADERS, headers);


Getting the token (Ping/OIDC) in Java (client-credentials)

var client = HttpClient.newHttpClient();
var form = "grant_type=client_credentials" +
           "&client_id=" + URLEncoder.encode(System.getenv("OIDC_CLIENT_ID"), java.nio.charset.StandardCharsets.UTF_8) +
           "&client_secret=" + URLEncoder.encode(System.getenv("OIDC_CLIENT_SECRET"), java.nio.charset.StandardCharsets.UTF_8);

var req = HttpRequest.newBuilder(URI.create("https://idp.example.com/oauth2/token"))
    .header("Content-Type", "application/x-www-form-urlencoded")
    .POST(HttpRequest.BodyPublishers.ofString(form))
    .build();

var res = client.send(req, HttpResponse.BodyHandlers.ofString());
String token = new org.json.JSONObject(res.body()).getString("access_token"); // needs the org.json dependency


Pro tips (Kong/Ping friendly)

  • Always send Authorization: Bearer <token> (no quotes, single space).
  • Handle 401 by refreshing the token (cache access_token + expires_in).
  • For Cloudflare/ALB in front, ensure they don’t strip Authorization.
  • If you need mTLS as well, add your keystore/truststore to the HTTP client config; the Bearer header stays the same.
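
When a call 401s, it is often just an expired token; you can eyeball the payload from the shell before touching Java. A sketch (base64url padding is the fiddly part; this does not verify the signature):

```shell
# print the payload (2nd segment) of a JWT; decode only, no signature check
jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')   # base64url -> base64
  case $(( ${#seg} % 4 )) in 2) seg="$seg==" ;; 3) seg="$seg=" ;; esac
  printf '%s' "$seg" | base64 -d
}
# usage: jwt_payload "$TOKEN" | grep -o '"exp":[0-9]*'
```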

If you tell me which client you’re using (Spring WebClient, RestTemplate, OkHttp, Apache, or pure Java 11) and how you obtain tokens (client-credentials vs user flow), I’ll tailor a tiny reusable “TokenProvider” + interceptor for you.

Kong – proxy issue

Got it—your client runs in AWS and must use a corporate proxy. With mTLS, a client-side proxy can absolutely be the culprit. Here’s the playbook.

What’s happening

  • If the proxy does TLS inspection (MITM), it terminates TLS and re-signs with its own CA. Your server asks the proxy (not the real client) for a cert → it has none → server logs “trying to obtain a certificate from the client.”
  • The client may also see “unable to get local issuer certificate” because it’s validating the proxy’s substituted cert but doesn’t trust the proxy’s Root CA.

Decide the path (pick one)

A) Allow end-to-end mTLS (best):
Ask the proxy admins to bypass SSL inspection for your domain (add it to the proxy’s TLS bypass list) or ensure it does pure CONNECT tunneling. Then the real client cert reaches your server.

B) Two-hop mTLS (enterprise pattern):

  • Client ↔ Proxy: mTLS using Client-Cert-#1 (issued by proxy’s CA).
  • Proxy ↔ Your Server: mTLS using Client-Cert-#2 (issued by a CA your server trusts).
    Your server will authenticate the proxy’s identity, not the original client. If you need end-user identity, have the proxy forward vetted identity (headers/JWT) and verify it.

C) Temporary test bypass:
On that AWS instance, set NO_PROXY for your hostname if policy allows, and open egress 443 to your server to confirm the issue is the proxy.

Quick diagnostics (run both)

1) See if proxy is intercepting

# Behind the proxy
openssl s_client -proxy PROXY_HOST:PROXY_PORT \
  -connect YOUR_HOST:443 -servername YOUR_HOST -showcerts </dev/null | \
  openssl x509 -noout -issuer -subject -enddate

Compare issuer/subject to the same command from a machine without the proxy.
If issuers differ (e.g., “Zscaler/BlueCoat/Corp Root CA”), it’s inspecting.

2) Prove mTLS works through a tunnel

# If proxy supports pure CONNECT (no inspection)
curl -v https://YOUR_HOST/ \
  --proxy http://PROXY_HOST:PROXY_PORT \
  --cert client.crt --key client.key \
  --cacert server_ca_chain.pem

  • If your proxy itself is HTTPS, add: --proxy-cacert PROXY_ROOT_CA.pem.

Java client notes (common in AWS)

# Client cert for mTLS to your server
-Djavax.net.ssl.keyStore=/path/client.p12 -Djavax.net.ssl.keyStorePassword=***** -Djavax.net.ssl.keyStoreType=PKCS12
# Trust your server (or proxy if inspecting)
-Djavax.net.ssl.trustStore=/path/truststore.jks -Djavax.net.ssl.trustStorePassword=*****
# Proxy settings (CONNECT)
-Dhttps.proxyHost=PROXY_HOST -Dhttps.proxyPort=PROXY_PORT
# Bypass (if allowed)
-Dhttp.nonProxyHosts="YOUR_HOST|*.yourdomain.com"

If inspection can’t be bypassed, you must import the proxy’s Root CA into the truststore, and if mTLS is required by your server, implement the two-hop mTLS model above.

What to send to the proxy team

  • “Please bypass TLS inspection (or allow pure CONNECT) for YOUR_HOST:443.”
  • If not possible: “We’ll run two-hop mTLS. We’ll trust your proxy’s client CA for the proxy→server leg; you’ll trust ours for client→proxy.”

If you share the outputs of the two openssl/curl tests (redact hostnames as needed), I’ll tell you exactly whether it’s inspection vs tunneling and the minimal change to make it work.


“unable to get local issuer certificate” error and proxy

Yes — if the client is behind a proxy, that can definitely cause the “unable to get local issuer certificate” error.

Here’s why:


1. Intercepting / SSL inspection proxy

  • Some proxies (corporate firewalls, Zscaler, BlueCoat, Palo Alto, etc.) terminate TLS, then re-sign the traffic with their own internal CA.
  • The client sees a cert issued by that proxy CA, not the real site.
  • If the client’s trust store doesn’t include that proxy’s root CA, validation fails → “unable to get local issuer certificate”.

2. Forward proxy (CONNECT tunnel)

  • If the proxy just forwards raw TLS (no inspection), usually no issue.
  • But if the proxy alters headers (Host/SNI) or drops part of the certificate chain, the server’s cert may not validate correctly.

3. Transparent proxy / middlebox

  • Some devices “peek” into TLS but don’t pass along the full chain → clients only see the leaf cert.
  • That also leads to the issuer validation error.

How to confirm if proxy is the problem

Run openssl s_client from behind the proxy and compare to running it directly (bypassing proxy):

openssl s_client -connect example.com:443 -servername example.com -showcerts

If the issuer/subject of the leaf cert changes (e.g. you see “Zscaler Root CA” instead of “Let’s Encrypt”), then the proxy is doing TLS interception.


Fixes if proxy is cause

  • Trusted root CA: import the proxy’s root/intermediate CA into the client trust store (system trust, Java cacerts, browser trust store, etc.).
  • Bypass proxy for domains where you need direct trust (if allowed by policy).
  • Update proxy config so it presents the full chain correctly.

👉 Do you want me to show you exactly how to check if your client’s proxy is re-signing the cert (step-by-step with openssl and comparing cert subjects)?

unable to get local issuer certificate

“unable to get local issuer certificate” means the TLS verifier can’t build a valid trust chain from the server’s cert → a trusted root. In plain English: the server didn’t send the right intermediate(s), or it’s using a CA your client doesn’t trust (e.g., a private CA or a Cloudflare Origin Certificate seen directly).

Here’s how to fix it fast in a Cloudflare → F5 → Kong → Upstream stack.
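
You can reproduce the exact error locally with a throwaway CA, which makes the "missing issuer" failure mode concrete (all names below are made up; requires the openssl CLI):

```shell
# demo: a leaf cert fails verification until its issuing CA is supplied
cd "$(mktemp -d)"
# throwaway CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 1 -subj "/CN=Demo Root CA" 2>/dev/null
# leaf cert signed by it
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=api.example.com" 2>/dev/null
openssl x509 -req -in leaf.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out leaf.crt -days 1 2>/dev/null
openssl verify leaf.crt || true          # fails: unable to get local issuer certificate
openssl verify -CAfile ca.crt leaf.crt   # leaf.crt: OK
```

This is exactly what happens when a server omits its intermediates or uses a private CA: supplying the issuer (serving the full chain, or trusting the CA) is the fix.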


1) Identify which hop is failing

Run these from your laptop (replace host):

# Client → Cloudflare (public hostname)
openssl s_client -showcerts -servername api.example.com -connect api.example.com:443 </dev/null | sed -n '1,/-END CERTIFICATE-/p'

# Client → F5 (bypass Cloudflare: hit the VIP IP)
openssl s_client -showcerts -connect <F5_PUBLIC_IP>:443 </dev/null

# Direct to Kong node (if it terminates TLS)
openssl s_client -showcerts -connect <KONG_NODE_IP>:8443 </dev/null

# From inside Kong to the upstream (what Kong sees)
docker exec -it kong sh -lc 'apk add --no-cache openssl >/dev/null 2>&1 || true; \
  openssl s_client -showcerts -servername <UPSTREAM_HOST> -connect <UPSTREAM_HOST>:443 </dev/null'

Look at:

  • The presented chain (server cert + intermediates)
  • Verify return code: (should be 0 (ok))

2) Common root causes & specific fixes

A) Missing intermediate on your origin (F5/Kong/upstream)

Symptom: OpenSSL shows only the server cert, or depth=0 ok then fails at depth=1 with this error.

Fix:

  • F5: import the intermediate CA that issued your server cert and set it as the Chain in the Client SSL Profile bound to your VIP (Certificate + Key + Chain). Ensure the chain is complete (server → intermediate(s) → root).
  • Kong terminating TLS (TLS Ingress on 8443): configure cert + key using the full chain bundle (server + intermediates). Most proxies require fullchain.pem (not just cert.pem).
  • Upstream service: install a proper cert and serve the intermediates.

B) You used a Cloudflare Origin Certificate and connected directly to F5

Cloudflare Origin Certs are only trusted by Cloudflare, not by browsers/curl.
Symptom: Works when proxied through Cloudflare (orange cloud), fails when you hit the F5 IP directly.

Fix options:

  • Keep traffic proxied through Cloudflare (don’t bypass); or
  • Install a publicly trusted cert on F5 (e.g., Let’s Encrypt) for direct access.

C) Kong → Upstream uses an internal/private CA

Symptom: Client to Kong is fine; Kong logs show upstream TLS verify errors, or your route returns 500/502.

Fix (Kong Admin API):

  1. Upload your internal CA:
curl -s -X POST :8001/ca_certificates -F cert=@corp-root-or-intermediate.pem
# -> returns {"id":"<CA_ID>", ...}

  2. Attach it to the Service and enforce verification:
curl -s -X PATCH :8001/services/<service_id> \
  -d tls_verify=true \
  -d tls_verify_depth=2 \
  -d ca_certificates[]='<CA_ID>'

(Temporary only: you can set tls_verify=false to prove that CA trust is the issue—then turn it back on with the proper CA.)

D) Wrong SNI / Hostname mismatch

If the cert is for api.example.com but the TLS handshake SNI is different, validation fails.

  • Ensure F5 sends correct SNI when it re-encrypts to Kong/upstream.
  • In Kong Service for HTTPS upstreams, set sni=<upstream-cert-hostname>.

E) Old client trust store / corporate MITM

  • Older clients may lack newer roots (update trust store).
  • If a corporate proxy is intercepting TLS, clients need that proxy’s root in their trust store.

3) Quick “good config” checklist per hop

Cloudflare → F5

  • SSL mode: Full (strict).
  • (Optional) Authenticated Origin Pull: F5 must trust CF Origin Pull CA and require client cert.
  • F5 presents a cert chain your clients would trust when you test direct (unless you never hit it directly).

F5 → Kong

  • If re-encrypting: Server SSL profile with proper SNI to Kong (if Kong expects it). Otherwise send HTTP on a private VLAN to keep it simple.
  • Preserve headers (X-Forwarded-Proto, X-Forwarded-For).

Kong → Upstream

  • For HTTPS upstreams:
    • protocol=https, port=443, sni=<host>, tls_verify=true
    • Attach internal CA certificate if upstream uses a private CA (see step 2C).

4) Don’t “fix” by disabling verification (except to isolate)

  • curl -k, Node’s NODE_TLS_REJECT_UNAUTHORIZED=0, Java -Dcom.sun.net.ssl.checkRevocation=false—use only for a one-off test. The real fix is serving the right chain or installing the proper CA.

5) One-command validator you can paste in tickets

openssl s_client -connect api.example.com:443 -servername api.example.com -verify_return_error </dev/null

You’ll get a clear reason (missing issuer / self-signed / hostname mismatch).


If you tell me which endpoint shows the error (Cloudflare URL, direct F5 VIP, Kong 8443, or Kong→Upstream) and paste the first ~40 lines of the openssl s_client output, I’ll point to the exact certificate/chain file you need to fix.