Short version: the cipher suite isn’t why you’re getting 403. A 403 is an authorization/ACL/WAF decision at L7. The TLS differences you saw (TLS 1.3 locally vs TLS 1.2 on EKS) are fine and expected depending on your load balancer policy.
What to do first (to find the real 403 source)
- Hit the EKS URL with verbose curl and look at headers:
curl -i -v https://your.domain/path
- If the response headers look AWS-y and you see it immediately, the 403 likely comes from ALB/WAF.
- If it reaches your pod/ingress and then 403s, it's from Ingress/Kong/app (e.g., missing Authorization, IP allowlist, route rule, JWT/OIDC plugin, etc.).
- Check ALB access logs for the request:
  - If elb_status_code = 403 and target_status_code is "-", ALB/WAF blocked it before it reached the target (WAF rules often show up as 403).
  - If target_status_code = 403, your target/app returned the 403 and the ALB passed it through. (AWS Documentation)
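That check can be scripted. A minimal sketch, assuming the default space-separated ALB access-log layout (field 9 is elb_status_code, field 10 is target_status_code); the two log lines below are invented and abbreviated for illustration — real ones come from the S3 bucket configured on the load balancer:

```shell
# Two invented, abbreviated ALB access-log lines for illustration only.
cat > /tmp/alb_sample.log <<'EOF'
https 2024-01-01T00:00:00Z app/my-alb/abc 203.0.113.5:55432 10.0.1.5:8080 0.001 0.002 0.000 403 -
https 2024-01-01T00:00:01Z app/my-alb/abc 203.0.113.6:55433 10.0.1.5:8080 0.001 0.002 0.000 403 403
EOF
# Field 9 = elb_status_code, field 10 = target_status_code.
awk '
  $9 == "403" && $10 == "-"   { print $2, "-> blocked at ALB/WAF (never reached target)" }
  $9 == "403" && $10 == "403" { print $2, "-> target/app returned the 403" }
' /tmp/alb_sample.log
```

Run it over the real log files you pull from S3 to see at a glance which hop generated each 403.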
Recommended TLS settings (good practice, not the 403 fix)
If you terminate TLS on an AWS ALB (via AWS Load Balancer Controller)
Use a modern security policy that enables TLS 1.3 and falls back to TLS 1.2:
metadata:
  annotations:
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
That policy offers TLS 1.3 and 1.2 with strong ciphers. (If you truly want TLS 1.3 only: ELBSecurityPolicy-TLS13-1-3-2021-06.) (AWS Documentation, Kubernetes SIGs)
Tip: If you need maximum client compatibility, attach both an RSA and an ECDSA ACM cert (ALB supports multiple certs): alb.ingress.kubernetes.io/certificate-arn: arn:...rsa,arn:...ecdsa. (Kubernetes SIGs)
If you terminate TLS on an NLB (TLS listener)
Pick a TLS 1.3 policy on the listener; NLB supports it and will use appropriate backend defaults. (AWS Documentation)
If you terminate TLS on ingress-nginx inside the cluster
Use TLS 1.2 + 1.3 and a modern cipher list:
# ConfigMap for ingress-nginx controller
data:
  ssl-protocols: "TLSv1.2 TLSv1.3"
  ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305"
  ssl-prefer-server-ciphers: "off"
(Those are the ingress-nginx docs’ modern defaults; TLS 1.3 ciphers are implicit.) (kubernetes.github.io)
Why your TLS info looks different
- Local: likely direct to your app or a local reverse proxy → negotiates TLS 1.3 (TLS_AES_128_GCM_SHA256 is one of the standard TLS 1.3 cipher suites).
- EKS: you're probably terminating at an ALB (or an older policy) → negotiated TLS 1.2 (TLS_ECDHE_ECDSA_WITH_AES_128_…). That's still strong if you use a modern ALB policy. (AWS Documentation)
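You can confirm the difference yourself. A small helper sketch (the hostname is a placeholder; the grep patterns assume the usual summary lines printed by OpenSSL's s_client):

```shell
# Report the negotiated TLS protocol and cipher for a host (diagnostic sketch).
# $1 is the hostname; requires the openssl CLI.
tls_probe() {
  openssl s_client -connect "$1:443" -servername "$1" </dev/null 2>/dev/null \
    | grep -E 'Protocol|Cipher'
}
# Usage: tls_probe your.domain
```

Run it against localhost and against the EKS endpoint; seeing TLSv1.3 on one and TLSv1.2 on the other is exactly the (harmless) difference described above.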
Quick 403 triage checklist
- AWS WAF attached? Check WAF logs for the rule ID; 403s here are common and configurable. (Repost, Amazon Web Services, Inc.)
- Auth headers missing? Ingress/controller or a proxy hop might be stripping/renaming Authorization. Confirm with curl -v -H "Authorization: Bearer …".
- Host/Path mismatch? Ingress host: and path rules must match exactly; a wrong Host yields 4xx at ingress.
- IP allowlist / Geo rules? Compare client IP vs policy.
- Kong/OIDC/JWT plugin? Kong often 403s on an invalid/absent token or unauthorized consumer; check Kong logs for the route and plugin decision.
- CORS preflight? Browsers can show 403 on a failed OPTIONS; ensure your ingress routes OPTIONS.
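For the CORS item, a browser preflight can be reproduced by hand. A sketch (the Origin value and URL are placeholders you'd swap for your real ones):

```shell
# Reproduce a browser CORS preflight to see whether OPTIONS is routed and allowed.
# $1 is the target URL; a healthy response is 2xx with Access-Control-Allow-* headers.
cors_preflight() {
  curl -si -X OPTIONS "$1" \
    -H "Origin: https://app.example" \
    -H "Access-Control-Request-Method: GET" \
    -H "Access-Control-Request-Headers: authorization" \
    | head -n 20
}
# Usage: cors_preflight https://your.domain/path
```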
If you share:
- one ALB access log line for a blocked request, and/or
- the response headers from curl -i -v on EKS,
I'll tell you precisely whether the 403 is from ALB/WAF or from your Ingress/Kong/app, and the exact knob to turn.
Recap: locally you can call the on-prem API just fine, but from EKS you get 403.
Handshake is OK (so network + TLS basics are fine). This is almost never a cipher issue—403 is an L7 decision (WAF/edge/Kong/app/proxy policy).
Here’s a tight plan to pinpoint and fix it.
1) Identify who is returning the 403 (one command from EKS)
Run from an EKS pod (ephemeral curl pod shown below) and inspect headers:
kubectl run curl --rm -it --image=curlimages/curl:8.8.0 -- \
sh -c 'curl -i -v https://your.onprem.host/path -H "Authorization: Bearer $TOKEN" --max-time 20'
Look for these telltales in the 403 response:
- Cloudflare: server: cloudflare, cf-ray, cf-cache-status
- F5: server: BigIP or ASM headers like X-WA-Info
- Kong: server: kong, x-kong-*
- Proxy (corp/Zscaler/etc.): Via, X-BlueCoat-Via, custom proxy headers
- Your app/ingress: app-specific headers, or none
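The telltale check can be scripted against saved headers. A sketch, with an invented Kong-fronted 403 as the sample (save real headers with curl -sD headers.txt -o /dev/null URL):

```shell
# Invented 403 response headers, for illustration only.
cat > /tmp/resp_headers.txt <<'EOF'
HTTP/2 403
server: kong/3.4.0
x-kong-response-latency: 1
EOF
# Rough classification of who sent the 403, by telltale headers:
if   grep -Eqi 'cf-ray|server: cloudflare' /tmp/resp_headers.txt; then who="edge: Cloudflare"
elif grep -Eqi 'server: bigip|x-wa-info'   /tmp/resp_headers.txt; then who="edge: F5"
elif grep -Eqi 'server: kong|x-kong-'      /tmp/resp_headers.txt; then who="gateway: Kong"
elif grep -Eqi '^via:|x-bluecoat-via'      /tmp/resp_headers.txt; then who="corp proxy"
else who="app/ingress (no telltale headers)"; fi
echo "$who"
```

With the sample above it reports the Kong case; point it at your real headers file to get the actual hop.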
If your pod must use an egress proxy, also try explicitly through it:
kubectl run curlp --rm -it --image=curlimages/curl:8.8.0 -- \
sh -c 'curl -i -v https://your.onprem.host/path --proxy http://PROXY_HOST:PROXY_PORT --max-time 20'
2) Fix by source
A) WAF/Edge (Cloudflare/F5) is denying EKS
- Likely IP allowlist/geo/ASN or bot rule.
Fix: Allowlist your EKS NAT/Egress IP(s) or relax/adjust the specific WAF rule; confirm in WAF/Firewall logs.
B) Corporate egress proxy is denying
- From EKS, traffic goes via a corp proxy that enforces access policy or strips Authorization.
Fix:
- Whitelist the destination domain on the proxy, or
- Set NO_PROXY=your.onprem.host on the workload (if policy allows), or
- Ensure the proxy allows the path and forwards all headers (esp. Authorization).
- If the proxy does TLS inspection and your server expects client mTLS, either bypass inspection for this domain or use a two-hop mTLS model.
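The NO_PROXY option looks like this in practice. A sketch with placeholder names (curl and most HTTP clients honor these variables; confirm your proxy policy allows the bypass first):

```shell
# Route traffic through the corp proxy EXCEPT for the on-prem host
# (PROXY_HOST/PROXY_PORT and your.onprem.host are placeholders).
export HTTPS_PROXY=http://PROXY_HOST:PROXY_PORT
export NO_PROXY=your.onprem.host,.svc,.cluster.local
# curl -i https://your.onprem.host/path   # would now bypass the proxy
```

In Kubernetes you'd set the same variables in the container's env so the workload, not just your debug shell, gets the bypass.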
C) Kong/Ingress/app is denying
- Common causes: missing/invalid token, ACL/IP allowlist plugin, wrong Host header/SNI, or CORS preflight.
Fix:
- Verify the request from EKS includes the same auth headers as local (an env var or secret may be missing).
- If calling by IP, set the correct Host header:
curl -H "Host: your.onprem.host" --resolve your.onprem.host:443:W.X.Y.Z https://your.onprem.host/...
- Check Kong logs for the route/plugin decision (look for consumer, route, service, plugin names).
3) Quick differentials to run from EKS
- Compare with and without proxy env:
# Without proxy (if allowed)
curl -i -v https://your.onprem.host/path
# Via proxy
curl -i -v https://your.onprem.host/path --proxy http://PROXY_HOST:PROXY_PORT
- Preserve Host/SNI if you must target an IP:
curl -i -v --resolve your.onprem.host:443:W.X.Y.Z https://your.onprem.host/path
- Include your real auth:
curl -i -v -H "Authorization: Bearer $TOKEN" https://your.onprem.host/path
- See who really answered:
Check Server, Via, cf-ray, x-kong-*, X-Envoy-*, etc., in the 403 response.
4) About ciphers/TLS
Use modern policies that allow TLS 1.3 + 1.2 with ECDHE + AES-GCM/CHACHA. The exact cipher (e.g., TLSv1.3 TLS_AES_128_GCM_SHA256 locally vs TLSv1.2 ECDHE_*_AES_128_GCM_SHA256 from EKS) will not cause a 403. It’s fine to keep both TLS 1.2 and 1.3 enabled; the fix lies in the access/authorization layer, not the cipher.
If you paste the 403 response headers from the EKS curl (no body needed), I can tell you in one shot whether it’s WAF/edge, proxy, Kong, or your app, and give the exact knob to turn (allowlist, header forward, plugin, or bypass).