KONG – 504

Common Causes

  1. Upstream service is slow or down
  2. DNS resolution delay or failure
  3. Incorrect protocol (HTTP vs HTTPS)
  4. Upstream timeout too low in Kong config
  5. Load balancer or firewall in between is blocking/delaying

If you're seeing upstream_status=504 in Kong logs, it means:

Kong sent the request to the upstream (your backend service), but did not receive a response within the configured timeout, so it returned a 504 Gateway Timeout to the client.
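The behavior above can be sketched in a few lines. This is an illustrative toy, not Kong code: a "gateway" forwards one request, waits up to a read timeout, and maps upstream silence to a 504 (all names here are made up):

```python
# Toy gateway behavior: forward a request, return 504 if the upstream is silent.
import socket
import threading
import time

READ_TIMEOUT = 0.5  # seconds; Kong configures this per service, in milliseconds

def proxy_once(host, port, timeout=READ_TIMEOUT):
    """Forward one request; return the upstream's status, or 504 on timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            s.sendall(b"GET / HTTP/1.1\r\nHost: upstream\r\n\r\n")
            data = s.recv(1024)  # blocks until the upstream answers
            if not data:
                return 502  # connection closed with no response at all
            return int(data.split()[1].decode())  # upstream's own status code
    except socket.timeout:
        return 504  # Gateway Timeout: no response within the deadline

# A "hung" upstream: accepts the TCP connection but never replies.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

def hang():
    conn, _ = listener.accept()
    time.sleep(2)  # hold the connection open, say nothing
    conn.close()

threading.Thread(target=hang, daemon=True).start()
print(proxy_once("127.0.0.1", port))  # 504
```

The same logic explains the fixes below: either make the upstream answer sooner, or give the gateway a longer deadline.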


🔍 What to Check When You See upstream_status=504

✅ 1. Verify Upstream is Healthy

Try to access your upstream from the Kong host directly:

curl -v http://<upstream-host>:<port>/<endpoint>

  • If this is slow or hangs → your backend is the problem.
  • If it fails to connect → network or DNS issue.

✅ 2. Increase Kong Timeouts

Timeouts are per-service. You can increase them using the Admin API:

curl -X PATCH http://localhost:8001/services/{service_name} \
  --data "connect_timeout=10000" \
  --data "read_timeout=15000" \
  --data "write_timeout=10000"

All values are in milliseconds.
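If you manage Kong declaratively (DB-less mode or decK), the same per-service timeouts live in the YAML config. A sketch, with a placeholder service name and URL:

```yaml
_format_version: "3.0"
services:
  - name: my-api                       # placeholder service name
    url: http://upstream.internal:8080 # placeholder upstream URL
    connect_timeout: 10000             # ms
    read_timeout: 15000                # ms
    write_timeout: 10000               # ms
```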


✅ 3. Check Protocol Mismatch (HTTP vs HTTPS)

If your upstream expects HTTPS but the service is configured with protocol = http, Kong sends plaintext to a TLS port; the upstream may drop the connection or never answer, and the request hangs until it times out.

Fix it with:

curl -X PATCH http://localhost:8001/services/{service_name} \
  --data "protocol=https"


✅ 4. Check Kong Error Log

Run:

tail -n 100 /usr/local/kong/logs/error.log | grep timeout

You may see:

[error] ... upstream timed out (110: Connection timed out) while reading response header from upstream
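These nginx-level messages follow a few fixed patterns, so a small script can triage a log dump. A hedged sketch (the patterns cover the common cases, not every possible error):

```python
# Map common nginx/Kong upstream error messages to a likely next step.
import re

ADVICE = [
    (r"upstream timed out \(110", "timeout: raise read_timeout or speed up the upstream"),
    (r"\(111: Connection refused\)", "refused: nothing listening on that host/port"),
    (r"prematurely closed connection", "upstream closed early: check for crashes or resets"),
]

def classify(line: str) -> str:
    for pattern, advice in ADVICE:
        if re.search(pattern, line):
            return advice
    return "unclassified: read the full log line"

line = ("[error] ... upstream timed out (110: Connection timed out) "
        "while reading response header from upstream")
print(classify(line))  # timeout: raise read_timeout or speed up the upstream
```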


✅ 5. Check Health of Upstream Targets (If Using Upstreams)

curl http://localhost:8001/upstreams/{upstream_name}/health

If all targets are unhealthy, Kong cannot pick a peer and fails the request outright (typically with a 503); targets that are slow but still marked healthy are what usually surface as 504s.
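The health endpoint returns JSON with one entry per target, so a script can check it quickly. The payload shape below (a "data" list with a "health" field per target) is an assumption based on Kong's Admin API and may differ between versions:

```python
# Decide whether any target of an upstream is usable, given the JSON body
# from GET /upstreams/{upstream_name}/health. Payload shape is assumed.
def any_healthy(payload: dict) -> bool:
    return any(t.get("health") == "HEALTHY" for t in payload.get("data", []))

example = {"data": [
    {"target": "10.0.0.5:8080", "health": "UNHEALTHY"},
    {"target": "10.0.0.6:8080", "health": "HEALTHY"},
]}
print(any_healthy(example))  # True
```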


✅ 6. Review Load Balancer / Firewall Rules

If Kong goes through:

  • A corporate firewall
  • A cloud load balancer
  • An IPsec/VPN tunnel

Timeouts or dropped packets there can also lead to 504.
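One way to tell whether the path itself is the problem is to time a bare TCP connect from the Kong host: if even the handshake stalls, the delay is in the network, not in Kong or the application. A minimal sketch:

```python
# Time a TCP connect to the upstream; None means unreachable within the timeout.
import socket
import time

def connect_latency(host: str, port: int, timeout: float = 3.0):
    start = time.monotonic()
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return time.monotonic() - start
    except OSError:
        return None

# Demo against a local listener standing in for the upstream.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
host, port = listener.getsockname()
latency = connect_latency(host, port)
print(latency is not None and latency < 1.0)  # True
```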


📘 Example Full Troubleshooting Flow

  1. Run from Kong host/container: curl -v http://your-upstream-host:port/health
  2. If that times out:
    • Check if service is running.
    • Check firewall rules.
  3. Increase Kong timeouts: curl -X PATCH http://localhost:8001/services/my-api --data "read_timeout=20000"
  4. Check logs: tail -f /usr/local/kong/logs/error.log

If you paste the exact Kong service config and a log snippet with upstream_status=504, I can give you a more specific fix.

A 504 Gateway Timeout from Kong means:

🔴 Kong Gateway sent a request to your upstream service, but didn't get a response in time, so it gave up and returned a 504 to the client.


✅ QUICK FIX CHECKLIST

| Check | Action |
| --- | --- |
| ⏱ Timeout too short? | Increase timeout values for the service |
| 🔌 Backend down? | Test the backend directly from Kong |
| ❌ Wrong protocol? | Ensure the protocol is set correctly (http vs https) |
| 🔐 Firewall? | Check network/firewall between Kong and upstream |
| 📉 Backend overloaded? | Review CPU/memory on backend service |
| 🔁 Load balancer in the path? | Check intermediate LB timeouts |

🧪 1. Test the Backend Directly

From Kong host or container:

curl -v http://<upstream-host>:<port>/health

If this hangs or fails, the issue is upstream, not Kong.


🛠️ 2. Increase Kong Timeouts

Use the Admin API:

curl -X PATCH http://localhost:8001/services/{service_name} \
  --data "connect_timeout=10000" \
  --data "read_timeout=15000" \
  --data "write_timeout=10000"

(All values are in milliseconds.)


🔐 3. Check Protocol Mismatch

Is your backend using HTTPS but your service uses HTTP?

Fix:

curl -X PATCH http://localhost:8001/services/{service_name} \
  --data "protocol=https"


📄 4. Check Kong Logs

Look in:

/usr/local/kong/logs/error.log

Typical timeout error:

upstream timed out (110: Connection timed out) while reading response header from upstream


🧰 5. Enable Correlation ID for Tracing

Enable this to track requests:

curl -X POST http://localhost:8001/plugins \
  --data "name=correlation-id" \
  --data "config.header_name=X-Correlation-ID" \
  --data "config.generator=uuid" \
  --data "config.echo_downstream=true"

Now check logs using the X-Correlation-ID.


If you send your service config or a log example with upstream_status=504, I'll pinpoint the issue faster.

The error "An invalid response from upstream server" in Kong typically means:

Kong was able to connect to the upstream service, but the response it got back was malformed, empty, or violated protocol expectations.


🔍 Common Root Causes

| Cause | Description |
| --- | --- |
| ❌ Empty response | Upstream sent no body or headers back |
| ⚠️ Invalid HTTP format | Response doesn't follow the HTTP spec |
| 🔐 SSL/TLS mismatch | HTTP used with HTTPS backend or vice versa |
| 🧱 Protocol error | Upstream speaking gRPC, WebSocket, or another protocol unexpectedly |
| 🔌 Upstream closed connection | Before Kong finished reading the response |

✅ How to Troubleshoot

1. Check Logs

Look for this in Kong's error log:

tail -f /usr/local/kong/logs/error.log

Common message:

upstream prematurely closed connection while reading response header

2. Test Upstream Manually

Run from the Kong host or container:

curl -i http://<upstream-host>:<port>/<path>

If you get no response or a malformed one, the problem is the upstream.


3. Verify Protocol

Make sure you're not calling an HTTPS backend as HTTP:

# Fix protocol
curl -X PATCH http://localhost:8001/services/{service_name} \
  --data "protocol=https"


4. Increase Read Timeout

Sometimes upstreams are just too slow:

curl -X PATCH http://localhost:8001/services/{service_name} \
  --data "read_timeout=15000"


5. Check Upstream for Non-Standard Response

If the upstream returns:

  • Non-HTTP data (e.g., binary, gRPC without HTTP/1.1 framing)
  • Incorrect headers
  • Incomplete response

It will break Kong's HTTP parser.
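A quick way to see whether raw bytes from an upstream even look like HTTP/1.x is to check the status line, which is roughly the first thing any HTTP parser validates. A simplified sketch (real parsers are far stricter):

```python
# Check whether a raw response starts with a plausible HTTP/1.x status line.
import re

STATUS_LINE = re.compile(rb"^HTTP/1\.[01] \d{3}")

def looks_like_http(first_bytes: bytes) -> bool:
    return bool(STATUS_LINE.match(first_bytes))

print(looks_like_http(b"HTTP/1.1 200 OK\r\n"))  # True
print(looks_like_http(b"\x00\x00\x12\x04..."))  # False: binary framing, not HTTP/1.x
```

You can feed this the output of a raw probe (e.g. what `curl -i` or a socket read returns) to confirm the upstream is speaking the protocol Kong expects.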


🧪 Bonus: Use Request Termination for Testing

Temporarily override the upstream to test Kong behavior:

curl -X POST http://localhost:8001/services/{service_name}/plugins \
  --data "name=request-termination" \
  --data "config.status_code=200"

If this works, Kong itself is healthy and the issue is on the upstream side.


If you can share the log snippet or the upstream service config (or test curl output), I can give you a pinpointed fix.
