Got it — here’s a clean, practical way to wire Client → Cloudflare → F5 → Kong → Upstream so your normal Kong flow (Client → [Kong Route matched] → [Service invoked] → Upstream API) works end-to-end.
0) Topology (recommended)
Client ─TLS─> Cloudflare ─TLS─> F5 VIP ─HTTP─> Kong (8000) ─(HTTP/HTTPS)─> Upstream
- Cloudflare terminates client TLS + WAF/bot/DDoS protection.
- F5 is your “origin” for Cloudflare, then load balances across Kong nodes.
- Kong routes/plugins/auth, then proxies to your upstream.
You can also do F5→Kong over HTTPS (8443). Start with HTTP (simpler), add re-encryption later.
1) Cloudflare (edge) setup
- DNS: `api.example.com` → orange-cloud (proxied) to your F5 public IP.
- SSL/TLS: set mode Full (strict).
- Authenticated Origin Pull (AOP):
- Enable it in Cloudflare.
- On F5, require the Cloudflare client cert (see F5 step) so only CF can hit your VIP.
- Origin server certificate (on F5): either
- a normal public cert (Let's Encrypt, etc.), or
- a Cloudflare Origin Certificate (valid only to CF; fine if you never bypass CF).
- API cache rules: bypass cache for `/api/*`; enable WebSockets if you use them.
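The cache bypass can be expressed as a Cache Rule filter in Cloudflare's Rules language; a sketch, assuming `/api/` is your actual API base path (adjust the prefix to match your routes):

```
# Cache Rule filter expression (Caching → Cache Rules), action "Bypass cache"
starts_with(http.request.uri.path, "/api/")
```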
2) F5 LTM (VIP to Kong) essentials
Virtual server (HTTPS on 443) → pool (Kong nodes on 8000)
- Client SSL profile: present your cert (public or CF Origin Cert).
- (Optional but recommended) verify the Cloudflare client cert for AOP: import the Cloudflare Origin Pull CA on F5, set it as a Trusted CA, and require a client cert.
- HTTP profile: enable; insert X-Forwarded-For; preserve headers.
- OneConnect: enable for keep-alives to Kong.
- Pool members: all Kong nodes, port 8000 (or 8443 if you re-encrypt).
- Health monitor: HTTP GET to Kong status (see Kong step).
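As a rough tmsh sketch of the pool and virtual server (object names, IPs, and profile names below are placeholders, not a drop-in config — adapt them to your environment):

```
# Pool of Kong nodes (plain HTTP on 8000); kong_status monitor defined separately
create ltm pool kong_pool members add { 10.10.1.11:8000 10.10.1.12:8000 } monitor kong_status

# HTTPS virtual server in front of the pool
create ltm virtual vs_api_443 \
    destination 203.0.113.10:443 ip-protocol tcp pool kong_pool \
    profiles add { http oneconnect clientssl_api { context clientside } } \
    source-address-translation { type automap }
```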
Header hygiene (iRule – optional, if you want to be explicit):
```tcl
when HTTP_REQUEST {
    # Preserve the real client IP from Cloudflare in X-Forwarded-For
    if { [HTTP::header exists "CF-Connecting-IP"] } {
        set cfip [HTTP::header value "CF-Connecting-IP"]
        if { [HTTP::header exists "X-Forwarded-For"] } {
            HTTP::header replace "X-Forwarded-For" "[HTTP::header value X-Forwarded-For], $cfip"
        } else {
            HTTP::header insert "X-Forwarded-For" $cfip
        }
    }
    HTTP::header replace "X-Forwarded-Proto" "https"
}
```
(Or just enable “X-Forwarded-For: append” in the HTTP profile and set X-Forwarded-Proto via a policy.)
3) Kong Gateway settings (behind F5)
Environment (docker-compose/env vars):
```bash
KONG_PROXY_LISTEN=0.0.0.0:8000
# Trust only your F5 addresses/CIDRs (do NOT trust 0.0.0.0/0)
KONG_TRUSTED_IPS=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,<F5_VIP_or_SNATs>
KONG_REAL_IP_HEADER=X-Forwarded-For
KONG_REAL_IP_RECURSIVE=on
# Helpful during testing: adds X-Kong-*-Latency response headers
KONG_HEADERS=latency_tokens
# Health endpoint for the F5 monitor (internal only)
KONG_STATUS_LISTEN=0.0.0.0:8100
```
F5 health monitor target (HTTP):
- URL: `http://<kong-node>:8100/status`
- Expect: `200` (Kong's Status API port; safe to expose internally)
Define Service/Route (example):
```bash
# Service pointing at your upstream (HTTP)
curl -s -X POST http://localhost:8001/services \
  -d name=orders-svc \
  -d host=tomcat-app \
  -d port=8080 \
  -d protocol=http

# Route that matches your API host/path
curl -s -X POST http://localhost:8001/routes \
  -d service.name=orders-svc \
  -d 'hosts[]=api.example.com' \
  -d 'paths[]=/v1'
```
(If the upstream is HTTPS, set `protocol=https` and `port=443`; Kong uses the service `host` for upstream TLS/SNI.)
4) Putting it together (request path)
- Client → `https://api.example.com/v1/...`
- Cloudflare terminates TLS, adds `CF-Connecting-IP`, forwards to F5.
- F5 validates the CF client cert (AOP), appends XFF, sets `X-Forwarded-Proto: https`, load balances to a Kong node.
- Kong trusts the F5 IP, extracts the real client IP from XFF, matches the Route (`Host=api.example.com`, `path=/v1`), invokes the Service, proxies to the Upstream.
- Response flows back; Kong adds latency headers (if enabled); F5 returns to CF; CF returns to the client.
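Kong's real-IP handling (with `KONG_REAL_IP_RECURSIVE=on`) is what turns the XFF chain above back into the original client address: walk X-Forwarded-For right to left, skip every hop that falls inside the trusted CIDRs, and take the first untrusted address. A minimal Python sketch of the idea (illustrative only, not Kong's actual code; the addresses are placeholders):

```python
import ipaddress

# Placeholder trusted ranges, mirroring KONG_TRUSTED_IPS
TRUSTED = [ipaddress.ip_network(c) for c in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_trusted(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in TRUSTED)

def real_client_ip(peer_addr: str, xff_header: str) -> str:
    """Recursive real-IP resolution: start from the connecting peer
    (the F5), walk X-Forwarded-For right to left skipping trusted
    hops; the first untrusted address is the client."""
    if not is_trusted(peer_addr):
        return peer_addr  # direct, untrusted connection: peer is the client
    hops = [h.strip() for h in xff_header.split(",") if h.strip()]
    for addr in reversed(hops):
        if not is_trusted(addr):
            return addr
    return peer_addr  # every hop trusted; fall back to the peer

# F5 (10.0.0.5) connects to Kong; XFF was built by Cloudflare, then F5:
print(real_client_ip("10.0.0.5", "203.0.113.7, 10.0.0.5"))  # → 203.0.113.7
```

This is also why `KONG_TRUSTED_IPS` must cover the F5 addresses: if the peer is untrusted, the XFF chain is ignored entirely.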
5) Testing (in layers)
- Direct to Kong (bypass CF/F5) on the private network:
  `curl -i -H 'Host: api.example.com' -H 'Kong-Debug: 1' http://<kong-node>:8000/v1/ping`
- Through the F5 VIP (simulating Cloudflare out of the path):
  `curl -i --resolve api.example.com:443:<f5-public-ip> https://api.example.com/v1/ping`
  (`--resolve` takes `host:port:address`; since the URL keeps the real hostname, no separate Host header is needed.)
- Through Cloudflare (real path):
  `curl -i -H 'Kong-Debug: 1' https://api.example.com/v1/ping`
Check headers:
- `X-Kong-Upstream-Latency` (upstream time), `X-Kong-Proxy-Latency` (Kong-internal time)
- Kong logs should show `client_ip` = the real client IP (not F5/CF) if the trusted IPs are correct.
6) Observability (recommended)
- Kong plugins: `prometheus` (metrics), `correlation-id` (X-Request-ID), `http-log` or `file-log` → ELK.
- F5: request logging profile to ELK; LTM stats.
- Cloudflare: Security events/Edge logs (if plan allows).
7) Common gotchas (and quick fixes)
- Client IP shows F5/Cloudflare → set
KONG_TRUSTED_IPS,REAL_IP_*; ensure F5 appends XFF and passesCF-Connecting-IP. - OIDC header lost → make sure F5 policy does not strip
Authorization. - Wrong redirects / mixed content → ensure
X-Forwarded-Proto: httpsreaches Kong & upstream. - “Server busy” / spikes → raise F5 OneConnect pool, ensure Kong
backlog/somaxconn, keepalive to upstream. - Health checks hitting Admin API → use
KONG_STATUS_LISTEN(8100) instead, not port 8001 (Admin).
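The first gotcha can be caught before deploying: verify that the address F5 actually connects from (its SNAT/self IP) falls inside your `KONG_TRUSTED_IPS` CIDRs. A quick standalone check (the SNAT address and CIDR list below are placeholders, substitute your real values):

```python
import ipaddress

# Placeholders: your real KONG_TRUSTED_IPS entries and F5 SNAT address
trusted_cidrs = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
f5_snat = "10.10.1.5"

covered = any(
    ipaddress.ip_address(f5_snat) in ipaddress.ip_network(cidr)
    for cidr in trusted_cidrs
)
print(f"F5 SNAT {f5_snat} covered by trusted_ips: {covered}")  # → True here
```

If this prints `False`, Kong will treat F5 itself as the client and ignore the XFF chain.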
If you want, I can:
- generate a Kong docker-compose block with those envs,
- give you an F5 monitor config snippet,
- and add a sample OIDC plugin + rate-limit plugin to the route for a production-like baseline.