Common Causes
- Upstream service is slow or down
- DNS resolution delay or failure
- Incorrect protocol (HTTP vs HTTPS)
- Upstream timeout too low in Kong config
- Load balancer or firewall in between is blocking/delaying
If you’re seeing upstream_status=504 in Kong logs, it means:
Kong sent the request to the upstream (your backend service) but did not receive a response within the configured timeout, so it returned a 504 Gateway Timeout to the client.
What to Check When You See upstream_status=504
1. Verify Upstream is Healthy
Try to access your upstream from the Kong host directly:
curl -v http://<upstream-host>:<port>/<endpoint>
- If this is slow or hangs → your backend is the problem.
- If it fails to connect → network or DNS issue.
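The two failure modes above can be told apart mechanically from curl's exit status (the code-to-cause mapping below follows curl's documented exit codes; the `classify_curl_exit` helper itself is just a convenience sketch):

```shell
#!/bin/sh
# Classify curl's exit status into a probable root cause.
# These codes are documented in curl's man page:
# 6 = could not resolve host, 7 = failed to connect,
# 28 = operation timed out (when run with --max-time).
classify_curl_exit() {
  case "$1" in
    0)  echo "upstream reachable" ;;
    6)  echo "dns failure" ;;
    7)  echo "connect failure (network/firewall)" ;;
    28) echo "timeout (slow or hung backend)" ;;
    *)  echo "other (see EXIT CODES in 'man curl')" ;;
  esac
}

# Usage: probe with a hard 5-second cap, then classify the result.
# curl -sv --max-time 5 -o /dev/null http://<upstream-host>:<port>/<endpoint>
# classify_curl_exit $?
```

This makes the distinction scriptable: exit 28 points at a slow backend, 6 at DNS, 7 at the network path.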
2. Increase Kong Timeouts
Timeouts are per-service. You can increase them using the Admin API:
curl -X PATCH http://localhost:8001/services/{service_name} \
--data "connect_timeout=10000" \
--data "read_timeout=15000" \
--data "write_timeout=10000"
All values are in milliseconds.
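To confirm the new values took effect, you can read the service object back and pull out the timeout fields. The field names (`connect_timeout`, `read_timeout`, `write_timeout`) match Kong's service schema; the jq-free extraction helper below is my own convenience sketch, not an official client:

```shell
#!/bin/sh
# Extract a numeric field from a Kong service JSON object without jq.
# Works on the flat JSON the Admin API returns for a single service.
json_field() {  # usage: json_field <field-name> <json-string>
  printf '%s' "$2" | sed -n "s/.*\"$1\":\([0-9]*\).*/\1/p"
}

# Usage: fetch the service and check its read_timeout took effect.
# svc=$(curl -s http://localhost:8001/services/{service_name})
# json_field read_timeout "$svc"
```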
3. Check Protocol Mismatch (HTTP vs HTTPS)
If your upstream uses HTTPS but the service is configured with protocol = http, Kong cannot complete the TLS handshake and may hang.
Fix it with:
curl -X PATCH http://localhost:8001/services/{service_name} \
--data "protocol=https"
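Since the fix is just making Kong's `protocol` field match what the backend actually speaks, one way to avoid typos is to derive the value from the URL you tested with (`url_to_protocol` is a hypothetical helper for illustration, not part of Kong):

```shell
#!/bin/sh
# Map an upstream base URL to the matching Kong service 'protocol' value.
url_to_protocol() {
  case "$1" in
    https://*) echo "https" ;;
    http://*)  echo "http" ;;
    *)         echo "unknown"; return 1 ;;
  esac
}

# Usage:
# proto=$(url_to_protocol "https://<upstream-host>:<port>")
# curl -X PATCH http://localhost:8001/services/{service_name} --data "protocol=$proto"
```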
4. Check Kong Error Log
Run:
tail -n 100 /usr/local/kong/logs/error.log | grep timeout
You may see:
[error] ... upstream timed out (110: Connection timed out) while reading response header from upstream
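The `(110: Connection timed out)` part is the OS errno plus message. A small filter can count how often each upstream error appears, which quickly shows whether timeouts dominate or something else is going on (a sketch over the standard nginx-style error-log lines Kong emits):

```shell
#!/bin/sh
# Summarize upstream errors in a Kong/nginx error log: pull the
# "(errno: message)" portion of each line and count occurrences.
summarize_upstream_errors() {
  grep -o '([0-9]*: [^)]*)' | sort | uniq -c | sort -rn
}

# Usage:
# tail -n 1000 /usr/local/kong/logs/error.log | grep upstream | summarize_upstream_errors
```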
5. Check Health of Upstream Targets (If Using Upstreams)
curl http://localhost:8001/upstreams/{upstream_name}/health
If all targets are unhealthy, Kong will return 504s.
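To see the health breakdown at a glance, you can count targets per state in the JSON that endpoint returns (this assumes the response contains `"health":"HEALTHY"` / `"UNHEALTHY"` entries per target, as in Kong's documented shape; the helper is a convenience sketch):

```shell
#!/bin/sh
# Count targets per health state in the JSON from
# GET /upstreams/{upstream_name}/health.
health_summary() {
  grep -o '"health":"[A-Z_]*"' | sort | uniq -c
}

# Usage:
# curl -s http://localhost:8001/upstreams/{upstream_name}/health | health_summary
```

If the summary shows zero HEALTHY targets, that alone explains the 504s.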
6. Review Load Balancer / Firewall Rules
If Kong goes through:
- A corporate firewall
- A cloud load balancer
- An IPsec/VPN tunnel
Timeouts or dropped packets there can also lead to 504.
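A raw TCP probe from the Kong host helps separate network-path problems from application problems: if even a bare connect fails or hangs, the firewall/LB/VPN is suspect before Kong or the backend code is. This sketch relies on bash's `/dev/tcp` feature and the coreutils `timeout` command:

```shell
#!/bin/sh
# Probe raw TCP reachability to the upstream, bypassing HTTP entirely.
tcp_probe() {  # usage: tcp_probe <host> <port> [timeout-seconds]
  if timeout "${3:-3}" bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "open"
  else
    echo "closed-or-filtered"
  fi
}

# Usage: tcp_probe <upstream-host> <port> 3
```

"closed-or-filtered" on a port the backend should be listening on points at the network path, not at Kong.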
Example Full Troubleshooting Flow
1. Run from the Kong host/container:
curl -v http://your-upstream-host:port/health
2. If that times out:
- Check if the service is running.
- Check firewall rules.
3. Increase Kong timeouts:
curl -X PATCH http://localhost:8001/services/my-api \
--data "read_timeout=20000"
4. Check logs:
tail -f /usr/local/kong/logs/error.log
If you paste the exact Kong service config and a log snippet with upstream_status=504, I can give you a more specific fix.
A 504 Gateway Timeout from Kong means:
Kong Gateway sent a request to your upstream service but didn't get a response in time, so it gave up and returned a 504 to the client.
QUICK FIX CHECKLIST
| Check | Action |
|---|---|
| Timeout too short? | Increase timeout values for the service |
| Backend down? | Test the backend directly from Kong |
| Wrong protocol? | Ensure the protocol is set correctly (http vs https) |
| Firewall? | Check network/firewall between Kong and upstream |
| Backend overloaded? | Review CPU/memory on the backend service |
| Load balancer in the path? | Check intermediate LB timeouts |
1. Test the Backend Directly
From Kong host or container:
curl -v http://<upstream-host>:<port>/health
If this hangs or fails, the issue is the upstream, not Kong.
2. Increase Kong Timeouts
Use the Admin API:
curl -X PATCH http://localhost:8001/services/{service_name} \
--data "connect_timeout=10000" \
--data "read_timeout=15000" \
--data "write_timeout=10000"
(All values are in milliseconds.)
3. Check Protocol Mismatch
Is your backend using HTTPS but your service uses HTTP?
Fix:
curl -X PATCH http://localhost:8001/services/{service_name} \
--data "protocol=https"
4. Check Kong Logs
Look in:
/usr/local/kong/logs/error.log
Typical timeout error:
upstream timed out (110: Connection timed out) while reading response header from upstream
5. Enable Correlation ID for Tracing
Enable this to track requests:
curl -X POST http://localhost:8001/plugins \
--data "name=correlation-id" \
--data "config.header_name=X-Correlation-ID" \
--data "config.generator=uuid" \
--data "config.echo_downstream=true"
Now check logs using the X-Correlation-ID.
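Once the ID is flowing, tracing one request across log files is a grep away. This assumes the correlation ID is actually written to the logs you search (e.g. via the file-log/http-log plugin or an access-log format that includes the header); the helper is a convenience sketch:

```shell
#!/bin/sh
# Pull every log line belonging to one request, given its correlation ID.
trace_request() {  # usage: trace_request <correlation-id> <logfile>...
  grep -h "$1" "$@"
}

# Usage:
# trace_request "<correlation-id>" /usr/local/kong/logs/access.log /usr/local/kong/logs/error.log
```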
If you send your service config or a log example with upstream_status=504, Iβll pinpoint the issue faster.
The error “An invalid response from upstream server” in Kong typically means:
Kong was able to connect to the upstream service, but the response it got back was malformed, empty, or violated protocol expectations.
Common Root Causes
| Cause | Description |
|---|---|
| Empty response | Upstream sent no body or headers back |
| Invalid HTTP format | Response doesn't follow the HTTP spec |
| SSL/TLS mismatch | HTTP used with an HTTPS backend, or vice versa |
| Protocol error | Upstream unexpectedly speaking gRPC, WebSocket, or another protocol |
| Upstream closed connection | Before Kong finished reading the response |
How to Troubleshoot
1. Check Logs
Look for this in Kong's error log:
tail -f /usr/local/kong/logs/error.log
Common message:
upstream prematurely closed connection while reading response header
2. Test Upstream Manually
Run from the Kong host or container:
curl -i http://<upstream-host>:<port>/<path>
If you get no response or a malformed one, the problem is the upstream.
3. Verify Protocol
Make sure you’re not calling an HTTPS backend as HTTP:
# Fix protocol
curl -X PATCH http://localhost:8001/services/{service_name} \
--data "protocol=https"
4. Increase Read Timeout
Sometimes upstreams are just too slow:
curl -X PATCH http://localhost:8001/services/{service_name} \
--data "read_timeout=15000"
5. Check Upstream for Non-Standard Response
If the upstream returns:
- Non-HTTP data (e.g., binary, gRPC without HTTP/1.1 framing)
- Incorrect headers
- Incomplete response
It will break Kong's HTTP parser.
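You can check for this without Kong in the loop: capture the raw bytes the upstream sends and verify they begin with a proper HTTP/1.x status line. The helper below is a rough sketch (it only inspects the first bytes, and assumes an HTTP/1.x upstream):

```shell
#!/bin/sh
# Rough sanity check on a raw response capture: a valid HTTP/1.x reply
# must start with a status line like "HTTP/1.1 200 OK". Binary frames,
# gRPC, or an empty read are what trip Kong's parser.
looks_like_http() {  # reads the raw response on stdin
  head -c 9 | grep -q '^HTTP/1\.[01]' && echo "valid-status-line" || echo "not-http/1.x"
}

# Usage: send a minimal request and inspect the raw reply:
# printf 'GET / HTTP/1.1\r\nHost: <upstream-host>\r\n\r\n' | nc <upstream-host> <port> | looks_like_http
```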
Bonus: Use Request Termination for Testing
Temporarily override the upstream to test Kong behavior:
curl -X POST http://localhost:8001/services/{service_name}/plugins \
--data "name=request-termination" \
--data "config.status_code=200"
If this works, the issue is 100% upstream-related.
If you can share the log snippet or the upstream service config (or test curl output), I can give you a pinpointed fix.